prod: revise authentication/authorization and deploy (consolidate Redis configuration) #262
Conversation
MOSU-240 refactor: increase the lockout duration for repeated per-minute request violations and persist day-blocked users to the DB
…ng and attachment updates
…lete and create operations into update method
…type mismatch and resource not found responses
…cking and update attachment handling
…cessary request path logging
…entities and requests
MOSU-246 refactor: revise notice and inquiry logic
…or cron job execution
…anup logic to use LogCleanupExecutor
…pdate annotations for cron job execution
…atorJob and LogCleanupJob
Consolidate scheduling via Quartz with a Resolver setup (reduces duplicated Configuration work)
…zer in spring.factories
…heAtomicOperator beans
…on in ExamQuotaCacheManager
…o refactor/enhanced-redis
Add a Resolver for cache atomic-operation dependency injection and dynamic Bean registration
…image build steps
Summary of Changes
Hello @polyglot-k, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've made significant improvements and refactorings across several core areas of our system. This pull request focuses on enhancing our CI/CD pipeline for smoother deployments, standardizing and extending our cron job architecture for better maintainability, improving how we manage Redis scripts for exam quota operations, and streamlining attachment management for inquiries and notices. These changes collectively boost the system's maintainability, extensibility, and reliability, particularly for automated deployments, scheduled tasks, and data consistency.
Highlights
- CI/CD and Deployment: Enhanced the GitHub Actions workflow to include source checkout, JDK setup, Gradle caching, external JAR fetching, Docker image build/push, and deployment, and changed the trigger branch to develop.
- Cron Job and Batch Processing: Refactored cron job classes to use Executor naming and interfaces, introduced a custom @CronTarget annotation, and added a new batch archiver for blocked IPs from cache to the database.
- Redis Script and Cache Management: Updated exam quota cache operators to retrieve Lua scripts from a shared map by name, and implemented lazy injection for the cache operator map to resolve potential circular dependencies.
- Attachment Handling Standardization: Introduced a standardized updateAttachment method across inquiry and notice services, ensuring attachments are deleted and recreated atomically during updates. Also, changed notice attachment URLs to use public URLs instead of pre-signed URLs.
- Inquiry/Notice Service Logic: Modified inquiry answer and notice service methods to accept the acting UserJpaEntity for proper auditing and permission checks, and ensured associated attachments are deleted when an inquiry answer is removed.
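The custom `@CronTarget` annotation mentioned in the highlights could look roughly like the following plain-Java sketch. This is an assumption, not the PR's actual code: the `value` attribute and the class names are illustrative, and in the real codebase the annotation would likely be meta-annotated with Spring's `@Component` so that annotated executors are registered as beans.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical shape of a custom stereotype like @CronTarget.
// In the real project this would probably carry @Component as a
// meta-annotation; here we only show the annotation and a lookup.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface CronTarget {
    String value() default ""; // e.g. a job group name (assumption)
}

@CronTarget("archive")
class BlockedIpArchiveExecutorSketch {
}

public class CronTargetDemo {
    public static void main(String[] args) {
        // Reflection lookup, as a bean-scanning resolver might perform it.
        CronTarget tag = BlockedIpArchiveExecutorSketch.class.getAnnotation(CronTarget.class);
        System.out.println(tag.value());
    }
}
```

A resolver configuration could then scan for classes bearing this annotation and register each one as a Quartz job target, which matches the PR's stated goal of replacing generic Spring stereotypes with a more controlled marker.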
Code Review
This pull request introduces a major refactoring of cron jobs, Redis usage, and attachment handling, along with a new IP rate-limiting feature. The changes are generally good, improving modularity and using modern Spring features. I've found a few issues: a potential regression in inquiry deletion logic, a validation message mismatch, a typo in a log message, a potential performance issue in a new batch job, and a significant behavioral change in a cron job.
```java
// Before
inquiryAnswerJpaRepository.findByInquiryId(postId).ifPresent(answer -> {
    inquiryAnswerService.deleteInquiryAnswer(postId);
});

// After
inquiryAnswerService.deleteInquiryAnswer(postId);
```
When deleting an inquiry, it's possible that it doesn't have an answer. The previous implementation handled this case gracefully by using ifPresent. The new implementation directly calls inquiryAnswerService.deleteInquiryAnswer(postId), which will throw an INQUIRY_ANSWER_NOT_FOUND exception if no answer exists. This could prevent inquiries without answers from being deleted.
```suggestion
inquiryAnswerJpaRepository.findByInquiryId(postId).ifPresent(answer -> inquiryAnswerService.deleteInquiryAnswer(postId));
```
```java
public record InquiryCreateRequest(
    @Size(max = 100, message = "제목은 최대 300자까지 입력 가능합니다.")
```
There is a mismatch between the validation annotation and the description message. The @Size annotation specifies a maximum length of 100 characters, but the message says '제목은 최대 300자까지 입력 가능합니다.' (Title can be up to 300 characters). Please make them consistent.
```suggestion
@Size(max = 300, message = "제목은 최대 300자까지 입력 가능합니다.")
```
```java
public void archive() {
    Map<String, BlockedIpHistory> blockedHistoryMap = blockedHistoryCache.asMap();

    List<BlockedIpHistoryLogJpaEntity> logs = blockedHistoryMap.values().stream()
            .filter(entry -> entry.getPenaltyLevel() == TimePenalty.LEVEL_5)
            .map(this::createBlockedHistoryLog)
            .toList();

    if (logs.isEmpty()) {
        log.debug("[BlockedIpArchiver] 저장할 로그가 없음.");
        return;
    }

    for (int i = 0; i < logs.size(); i += BATCH_SIZE) {
        int end = Math.min(i + BATCH_SIZE, logs.size());
        List<BlockedIpHistoryLogJpaEntity> batch = logs.subList(i, end);

        try {
            blockedIpHistoryLogJpaRepository.saveAllUsingBatch(batch);
            log.debug("[BlockedIpArchiver] 저장 완료: {}개", batch.size());
        } catch (Exception e) {
            log.error("[BlockedIpArchiver] 저장 실패: {}~{} 인덱스", i, end, e);
        }
    }
}
```
The current implementation of the archive method collects all matching log entries into a list before processing them in batches. If the blockedHistoryCache contains a very large number of entries, this could lead to high memory consumption.
A more memory-efficient approach would be to iterate through the cache entries and build batches on the fly, without creating a large intermediate list.
```java
@Override
public void archive() {
    List<BlockedIpHistoryLogJpaEntity> batch = new ArrayList<>(BATCH_SIZE);
    int savedCount = 0;
    for (BlockedIpHistory history : blockedHistoryCache.asMap().values()) {
        if (history.getPenaltyLevel() == TimePenalty.LEVEL_5) {
            batch.add(createBlockedHistoryLog(history));
            if (batch.size() >= BATCH_SIZE) {
                try {
                    blockedIpHistoryLogJpaRepository.saveAllUsingBatch(batch);
                    log.debug("[BlockedIpArchiver] 저장 완료: {}개", batch.size());
                    savedCount += batch.size();
                } catch (Exception e) {
                    log.error("[BlockedIpArchiver] 저장 실패. Batch size: {}", batch.size(), e);
                }
                batch.clear();
            }
        }
    }
    if (!batch.isEmpty()) {
        try {
            blockedIpHistoryLogJpaRepository.saveAllUsingBatch(batch);
            log.debug("[BlockedIpArchiver] 저장 완료: {}개", batch.size());
            savedCount += batch.size();
        } catch (Exception e) {
            log.error("[BlockedIpArchiver] 저장 실패. Batch size: {}", batch.size(), e);
        }
    }
    if (savedCount == 0) {
        log.debug("[BlockedIpArchiver] 저장할 로그가 없음.");
    }
}
```

```java
private static class RequestCounter {
    // ...
    blockedIpCache.invalidate(ip);
    blockedIpCache.put(ip, new BlockedIp(nextPenaltyLevel));
    log.warn("IP 차단: {}, 차단 레벨: {})", ip, nextPenaltyLevel);
```

The format string in the `log.warn` call above contains a stray closing parenthesis after the second placeholder (`{})`); this is the log-message typo flagged in the review summary.
```java
public void execute(JobExecutionContext ctx) {
    if (domainArchiveExecutors == null || domainArchiveExecutors.isEmpty()) {
        log.info("No domain archivers configured, skipping execution");
        return;
    }

    for (DomainArchiveExecutor archiver : domainArchiveExecutors) {
        try {
            log.info("Starting archive for {}", archiver.getName());
            archiver.archive();
            log.info("Archiving completed for {}", archiver.getName());
        } catch (Exception e) {
            log.error("Archiving failed for {}", archiver.getName(), e);
        }
    }
}
```
The execution logic for archivers has changed from parallel with delays to sequential. The previous implementation used a ScheduledExecutorService to run archivers with a delay between them, allowing them to run in parallel. The new implementation iterates and executes them sequentially in a single thread. This is a significant behavioral change that could lead to performance issues if any archiver takes a long time to execute, as it will block all subsequent archivers. Please verify if this change was intentional.
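For comparison, the earlier parallel behavior the review describes could be sketched roughly as follows. This is a hedged reconstruction, not the project's actual previous code: the interface mirrors the PR's `DomainArchiveExecutor` naming, while `StaggeredArchiveRunner`, `runAll`, and the delay value are assumptions made for illustration.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Interface mirroring the PR's DomainArchiveExecutor (illustrative).
interface DomainArchiveExecutor {
    String getName();
    void archive();
}

public class StaggeredArchiveRunner {
    // Schedule each archiver on a pool with a staggered start delay, so a
    // slow archiver cannot block the ones scheduled after it.
    public static void runAll(List<DomainArchiveExecutor> executors,
                              long delayStepMillis) throws InterruptedException {
        ScheduledExecutorService pool =
                Executors.newScheduledThreadPool(Math.max(1, executors.size()));
        long delay = 0;
        for (DomainArchiveExecutor executor : executors) {
            pool.schedule(executor::archive, delay, TimeUnit.MILLISECONDS);
            delay += delayStepMillis; // stagger the start times
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Whether the PR should keep the new sequential loop or restore something like this depends on how long each archiver runs; the sequential version is simpler and keeps per-archiver error isolation, but serializes total runtime.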
This pull request introduces several significant improvements and refactorings across the CI/CD pipeline, cron job architecture, Redis cache operations, and attachment management for inquiries and notices. The main themes are: enhanced deployment automation, standardization and extensibility of cron executors, improved Redis script management, and more robust and reusable attachment update logic.

CI/CD Pipeline Enhancements:
- `.github/workflows/self-depoly.yaml` now targets the `develop` branch instead of `test` and includes steps for source checkout, JDK setup, Gradle caching, external JAR fetching, Docker image build/push, and deployment, streamlining the deployment process and improving build reproducibility.

Cron Job Architecture Refactoring:
- Standardized cron job classes on the `Executor` naming convention and interface, moving from generic interfaces like `DomainArchiver` and `LogCleanup` to more specific `DomainArchiveExecutor` and `LogCleanupExecutor`. Also replaced Spring stereotypes with a custom `@CronTarget` annotation for better control and extensibility. [1] [2]
- Added `BlockedIpBatchAchiever`, which archives blocked IP history entries from a Caffeine cache to the database in batches, focusing on entries with the highest penalty level.

Redis Cache Operator Improvements:
- Updated `AtomicExamQuotaDecrementOperator` and `AtomicExamQuotaIncrementOperator` to retrieve Lua scripts from a shared `examLuaScripts` map by name, rather than direct injection, improving flexibility and maintainability. [1] [2]
- Updated `ExamQuotaCacheManager` to lazily inject the `examQuotaCacheAtomicOperatorMap`, resolving potential circular dependencies and using the correct qualifier.

Attachment Management Refactoring:
- Refactored `InquiryAnswerAttachmentService`, `InquiryAttachmentService`, and `NoticeAttachmentService` by introducing an `updateAttachment` method that deletes existing attachments before creating new ones, ensuring consistency and reducing code duplication. [1] [2] [3]
- Switched `NoticeAttachmentService` from pre-signed URLs to public URLs for improved accessibility.

Inquiry/Notice Service Adjustments:
- Updated inquiry and notice service methods to accept the acting `UserJpaEntity` as a parameter, ensuring proper auditing and permission checks during create and update operations. Also ensured that deleting an inquiry answer deletes its associated attachments. [1] [2] [3] [4] [5]

These changes collectively improve the maintainability, extensibility, and reliability of the system, especially around automated deployments, scheduled maintenance tasks, and data consistency for attachments.
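The lazy injection described for `ExamQuotaCacheManager` can be illustrated with a plain-Java analogue. This is a sketch under assumptions, not the PR's code: the manager receives a `Supplier` for the operator map and resolves it on first use instead of at construction time, which is how a wiring-time circular dependency is broken; in Spring the same effect comes from `@Lazy` or `ObjectProvider`. All names and the `Runnable` operator type are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Plain-Java analogue of lazy dependency injection: the map is looked up
// at call time, so the manager can be constructed before the map exists.
class ExamQuotaCacheManagerSketch {
    private final Supplier<Map<String, Runnable>> operatorMap;

    ExamQuotaCacheManagerSketch(Supplier<Map<String, Runnable>> operatorMap) {
        this.operatorMap = operatorMap;
    }

    void execute(String operatorName) {
        Runnable op = operatorMap.get().get(operatorName); // resolved lazily
        if (op == null) {
            throw new IllegalArgumentException("unknown operator: " + operatorName);
        }
        op.run();
    }

    public static void main(String[] args) {
        Map<String, Runnable> ops = new HashMap<>();
        // The manager is built before the map is populated, mimicking a
        // bean that exists before its (circular) dependency is resolvable.
        ExamQuotaCacheManagerSketch manager = new ExamQuotaCacheManagerSketch(() -> ops);
        ops.put("decrement", () -> System.out.println("decrement executed"));
        manager.execute("decrement");
    }
}
```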
Most Important Changes:

CI/CD and Deployment:
- Changed the workflow trigger branch to `develop`. (`.github/workflows/self-depoly.yaml`)

Cron Job and Batch Processing:
- Refactored cron job classes to the `Executor` naming and interfaces, introduced `@CronTarget`, and added a new batch archiver for blocked IPs from cache to DB. (`src/main/java/life/mosu/mosuserver/application/application/cron/ApplicationFailureLogCleanupExecutor.java`, `src/main/java/life/mosu/mosuserver/application/application/cron/ApplicationFailureLogDomainArchiveExecutor.java`, `src/main/java/life/mosu/mosuserver/application/caffeine/BlockedIpBatchAchiever.java`) [1] [2] [3]

Redis Script and Cache Management:

Attachment Handling Standardization:
- Introduced a standardized `updateAttachment` method for inquiries and notices, ensuring attachments are deleted and recreated atomically during updates. (`src/main/java/life/mosu/mosuserver/application/inquiry/InquiryAnswerAttachmentService.java`, `src/main/java/life/mosu/mosuserver/application/inquiry/InquiryAttachmentService.java`, `src/main/java/life/mosu/mosuserver/application/notice/NoticeAttachmentService.java`) [1] [2] [3]

Inquiry/Notice Service Logic:
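The delete-then-recreate pattern behind `updateAttachment` can be sketched with a minimal in-memory stand-in. This is not the PR's actual service: the store, method signatures, and `String` file keys are assumptions; in the real services both steps would run inside one `@Transactional` method so a failure rolls back to the previous attachment set.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory sketch of the standardized updateAttachment
// pattern: delete every existing attachment for a post, then create the
// new set, so an update never leaves a mix of old and new files.
class AttachmentServiceSketch {
    private final Map<Long, List<String>> attachmentsByPost = new HashMap<>();

    void create(Long postId, List<String> fileKeys) {
        attachmentsByPost.computeIfAbsent(postId, k -> new ArrayList<>()).addAll(fileKeys);
    }

    void deleteAll(Long postId) {
        attachmentsByPost.remove(postId);
    }

    void updateAttachment(Long postId, List<String> newFileKeys) {
        deleteAll(postId);           // drop the old set first
        create(postId, newFileKeys); // then recreate from the request
    }

    List<String> find(Long postId) {
        return attachmentsByPost.getOrDefault(postId, List.of());
    }
}
```

The design choice here is simplicity over diffing: rather than computing which attachments were added or removed, the update always replaces the full set, which is easy to reason about and keeps the inquiry and notice services behaviorally identical.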