
[Feat] Magazine crawling #77

Merged
marshmallowing merged 7 commits into develop from feature/54
Jan 30, 2026

Conversation

@marshmallowing
Member
@marshmallowing marshmallowing commented Jan 30, 2026

🔍️ Work summary

✨ Details

  • Jsoup-based crawling: parses Naver News search results in newest-first order (sort=0) to collect magazine data
  • Duplicate filtering: compares against URLs already stored in the DB (findAllUrlsByUrlIn) so articles that were already collected are not saved again
  • Scheduler registration: configured to run automatically every Monday at 3 AM (0 0 3 ? * MON)
  • Asynchronous manual API: an admin-only manual crawling API for when additional data is needed
    • Hidden from Swagger via the @Hidden annotation
    • Implemented asynchronously to prevent client timeouts
  • Domain entity extension: new tables and association mappings for the magazine community features
    • MagazineBookmark: saves each user's magazines of interest
    • MagazineCommentLike: like feature for magazine comments
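The duplicate-filtering step above can be sketched in plain Java. This is only an illustration of the idea; `UrlDedup` and `filterNew` are hypothetical names, not the actual service code, which compares crawled candidates against `findAllUrlsByUrlIn` results.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the duplicate-filtering step: given URLs already stored in the DB,
// keep only crawled candidates whose URL has not been seen before.
public class UrlDedup {
    public static List<String> filterNew(List<String> candidateUrls, Set<String> existingUrls) {
        return candidateUrls.stream()
                .distinct()                                  // drop duplicates within this crawl batch
                .filter(url -> !existingUrls.contains(url))  // drop URLs already persisted
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Set<String> existing = Set.of("https://n.news.naver.com/article/1");
        List<String> crawled = List.of(
                "https://n.news.naver.com/article/1",   // already in DB -> skipped
                "https://n.news.naver.com/article/2",
                "https://n.news.naver.com/article/2");  // in-batch duplicate -> kept once
        System.out.println(filterNew(crawled, existing));
        // prints [https://n.news.naver.com/article/2]
    }
}
```
Only the surviving list would then be passed to `saveAll`, so a retried crawl is idempotent with respect to already-stored articles.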

🛠️ Future refactoring and enhancement plans

  • Content-parsing expansion: extend collection channels beyond Naver News to fashion webzines (Musinsa, OnCuration, etc.)
  • Performance optimization

📸 Screenshots (optional)

Screenshot 2026-01-30 3:09:51 PM · Screenshot 2026-01-30 3:10:20 PM

💬 Review requests

Summary by CodeRabbit

  • New features

    • Magazine recommendations, category listings, and detail view (with view-count tracking)
    • Adding/removing magazine bookmarks and listing bookmarked magazines
    • Magazine comment creation, comment listing, and comment-like toggle
    • Magazine content collected via manual and scheduled automatic crawling, with periodic updates
  • Changes

    • Some write/mutation APIs (bookmarks, comments, likes) now require authentication
  • Chores

    • Added a runtime library for crawling


@marshmallowing marshmallowing self-assigned this Jan 30, 2026
@marshmallowing marshmallowing added the "✨ Feature" (feature development) label Jan 30, 2026
@coderabbitai
Contributor

coderabbitai bot commented Jan 30, 2026

Caution

Review failed

The pull request is closed.

Walkthrough

Adds magazine crawling (Naver News) and community features (bookmarks, comments, comment likes), introducing scheduling, a parser, entities, services, controllers, repositories, DTOs, and the jsoup dependency.

Changes

Cohort / File(s) Summary
Build and scheduling
build.gradle, src/main/java/com/ongil/backend/Application.java
Added the jsoup dependency (org.jsoup:jsoup:1.17.2) and enabled Spring scheduling (@EnableScheduling).
Controller
src/main/java/com/ongil/backend/domain/magazine/controller/MagazineController.java
Added endpoints for magazine recommendations/category/detail, bookmark toggle and list, comment creation and listing, comment-like toggle, and manual crawling (authentication applied to some).
Entities (magazine)
src/main/java/com/ongil/backend/domain/magazine/entity/Magazine.java, .../MagazineBookmark.java, .../MagazineComment.java, .../MagazineCommentLike.java
Added url (unique), publishedAt, and press to Magazine; removed the existing imageUrls; added a view-count increment method. Added bookmark and comment-like entities, plus a likeCount field with increment/decrement methods.
DTOs and converters
src/main/java/com/ongil/backend/domain/magazine/dto/.../*, src/main/java/com/ongil/backend/domain/magazine/converter/*
Added MagazineResDto and CommentResDto records (builder/schema annotations), CommentReqDto (validation), and Magazine/Comment conversion utilities (@UtilityClass).
Parser and crawler
src/main/java/com/ongil/backend/domain/magazine/parser/MagazineArticleParser.java, .../NaverNewsParser.java, src/main/java/com/ongil/backend/domain/magazine/service/MagazineCrawlingService.java
Parser interface and Naver News parser implementation (JSoup): extracts search-result links, de-duplicates, parses articles, and saves them to the DB.
Services and scheduler
src/main/java/com/ongil/backend/domain/magazine/service/MagazineService.java, .../MagazineCommentService.java, .../MagazineCrawlingService.java, .../scheduler/MagazineScheduler.java
Magazine list/detail (view-count increment), bookmark toggle, bookmark list, comment creation/listing, comment-like toggle, and a weekly (Mon 03:00) all-category crawling scheduler.
Repositories
src/main/java/com/ongil/backend/domain/magazine/repository/*
MagazineRepository: added random recommendation, category-sorted, and URL-existence queries. MagazineCommentRepository: added a JOIN FETCH comment query. Added MagazineBookmarkRepository and CommentLikeRepository.
Global, security, exceptions
src/main/java/com/ongil/backend/global/common/exception/ErrorCode.java, src/main/java/com/ongil/backend/global/config/SecurityConfig.java
Added MAGAZINE_NOT_FOUND and COMMENT_NOT_FOUND to ErrorCode. Adjusted public/authenticated rules for some /api/magazines/** endpoints in SecurityConfig.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant Controller as MagazineController
    participant Service as MagazineService
    participant Repo as MagazineRepository
    participant DB as Database

    rect rgba(100, 150, 200, 0.5)
    Client->>Controller: GET /api/magazines/recommended
    Controller->>Service: getRecommendedMagazines()
    Service->>Repo: findRandomRecommended(pageable)
    Repo->>DB: SELECT * FROM magazines ORDER BY RAND()
    DB-->>Repo: magazines
    Repo-->>Service: magazines
    Service->>Service: MagazineConverter.from()
    Service-->>Controller: List<MagazineResDto>
    Controller-->>Client: DataResponse<List<MagazineResDto>>
    end
sequenceDiagram
    participant Scheduler as MagazineScheduler
    participant CrawlService as MagazineCrawlingService
    participant Parser as NaverNewsParser
    participant Jsoup as Jsoup
    participant Repo as MagazineRepository
    participant DB as Database

    rect rgba(150, 200, 100, 0.5)
    Scheduler->>CrawlService: weeklyCrawlMagazines()
    loop for each category
      CrawlService->>CrawlService: getSearchQueriesByCategory()
      loop pages (MAX_PAGES)
        CrawlService->>Jsoup: connect(searchUrl).get()
        Jsoup-->>CrawlService: Document
        CrawlService->>CrawlService: extractArticleLinks()
        loop each articleUrl
          CrawlService->>Parser: parse(url, category)
          Parser->>Jsoup: connect(url).get()
          Jsoup-->>Parser: Document
          Parser->>Parser: parse title/content/metadata
          Parser-->>CrawlService: Optional<Magazine>
        end
      end
      CrawlService->>Repo: findAllUrlsByUrlIn(candidateUrls)
      Repo->>DB: SELECT url FROM magazines WHERE url IN (...)
      DB-->>Repo: existingUrls
      CrawlService->>CrawlService: filter new magazines
      CrawlService->>Repo: saveAll(newMagazines)
      Repo->>DB: INSERT INTO magazines
    end
    CrawlService-->>Scheduler: done
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • [Feat] Magazine crawling #77: same feature (magazine crawling and community) with overlapping jsoup, parser, service, and controller changes and high code similarity.

Suggested reviewers

  • neibler
  • kangcheolung
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 2.22% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly summarizes the main change (implementing magazine crawling) and matches the actual changes.
Linked Issues check ✅ Passed The PR's implementation satisfies the requirements of linked issue #54: Naver News crawling, the scheduler, and community features (bookmarks, comments, likes).
Out of Scope Changes check ✅ Passed All changes (the jsoup dependency, the @EnableScheduling annotation, and the SecurityConfig updates) are required for the crawling and community features and are in scope.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/main/java/com/ongil/backend/domain/magazine/entity/Magazine.java (1)

54-58: ⚠️ Potential issue | 🟠 Major

With @Builder, the inline initial values of viewCount and comments are ignored.

@Builder ignores inline field initialization. NaverNewsParser sets viewCount(0) explicitly, so this works for now, but the comments list will be null when the entity is created via the builder.

🐛 Apply @Builder.Default
-	@Column(name = "view_count", nullable = false)
-	private Integer viewCount = 0;
+	@Builder.Default
+	@Column(name = "view_count", nullable = false)
+	private Integer viewCount = 0;

 	@OneToMany(mappedBy = "magazine")
-	private List<MagazineComment> comments = new ArrayList<>();
+	@Builder.Default
+	private List<MagazineComment> comments = new ArrayList<>();
🤖 Fix all issues with AI agents
In
`@src/main/java/com/ongil/backend/domain/magazine/controller/MagazineController.java`:
- Around line 61-67: The toggleBookmark endpoint is currently reachable without
authentication due to SecurityConfig permitting "/api/magazines/**", causing
`@AuthenticationPrincipal` userId to be null and downstream USER_NOT_FOUND errors;
fix this by adding method-level security to the controller: annotate the
toggleBookmark method with `@PreAuthorize`("isAuthenticated()") (ensure the
PreAuthorize import is added) so requests must be authenticated before entering
MagazineController.toggleBookmark, or alternatively change the SecurityConfig
rule for "/api/magazines/**" to .authenticated() if you prefer a config-level
fix.
- Around line 103-118: The manual crawl endpoint
(MagazineController.manualCrawl) is currently publicly reachable and uses
CompletableFuture.runAsync without the application's taskExecutor; update
SecurityConfig to require authentication for the specific path
"/api/magazines/crawl" (and optionally restrict to admin role), and change the
async call in MagazineController.manualCrawl to use the existing taskExecutor
bean (inject the TaskExecutor/Executor used in AsyncConfig and pass it as the
second argument to CompletableFuture.runAsync); optionally annotate manualCrawl
with a method-level security annotation (e.g.,
`@PreAuthorize`("hasRole('ADMIN')")) after configuring role-based rules in
SecurityConfig.

In
`@src/main/java/com/ongil/backend/domain/magazine/converter/MagazineConverter.java`:
- Around line 10-20: The MagazineConverter.from method calls
magazine.getPublishedAt().toString() which can throw NPE if publishedAt is null;
update MagazineConverter.from to perform a null-safe conversion for publishedAt
(e.g., check magazine.getPublishedAt() and pass either its toString() or
null/empty string) when building MagazineResDto via MagazineResDto.builder so
the DTO creation won't NPE on null publishedAt.

In
`@src/main/java/com/ongil/backend/domain/magazine/dto/response/MagazineResDto.java`:
- Around line 9-38: Add a viewCount field to the MagazineResDto record and map
it in the converter: update MagazineResDto to include a Long viewCount (with a
Schema description and appropriate nullability similar to Product responses),
then modify MagazineConverter.from(Magazine) to set
.viewCount(magazine.getViewCount()) so the DTO reflects the Magazine entity's
viewCount used by getMagazinesByCategory and addViewCount().

In
`@src/main/java/com/ongil/backend/domain/magazine/entity/MagazineBookmark.java`:
- Around line 10-29: Add a uniqueness constraint on the MagazineBookmark entity
to prevent duplicate bookmarks by the same user and mark the relationships as
non-nullable: update the `@Table` on class MagazineBookmark to include a
uniqueConstraints entry for the (user_id, magazine_id) pair and set nullable =
false on both `@JoinColumn` annotations for the user and magazine fields so the
database enforces one bookmark per (user, magazine) and required relationships.

In
`@src/main/java/com/ongil/backend/domain/magazine/entity/MagazineCommentLike.java`:
- Around line 31-37: The user_id and magazine_comment_id foreign keys in
MagazineCommentLike are currently nullable; make them non-nullable by updating
the entity: set nullable=false on the `@JoinColumn` for the user and comment
fields (and optionally add a javax.validation `@NotNull` on the User user and
MagazineComment comment fields) so JPA/DDL will enforce NOT NULL, and also add a
database migration (or alter table) to convert existing nulls and alter the
columns to NOT NULL to keep schema and data consistent.

In `@src/main/java/com/ongil/backend/domain/magazine/parser/NaverNewsParser.java`:
- Line 24: Thread.sleep(800 + (long)(Math.random() * 500)) in NaverNewsParser is
swallowing InterruptedException and blocking the caller; change it to handle
interruption properly by catching InterruptedException separately, calling
Thread.currentThread().interrupt() to restore the interrupt status and either
propagate the interruption (rethrow or return) instead of continuing, and avoid
blocking synchronous request threads by moving the delay to an async executor
(e.g., ScheduledExecutorService or CompletableFuture.delayedExecutor) used by
the parsing method (the method in NaverNewsParser where Thread.sleep is called)
so delays run off-thread.

In
`@src/main/java/com/ongil/backend/domain/magazine/service/MagazineCommentService.java`:
- Around line 54-59: getComments currently returns an empty list for a
non-existent magazineId; add a presence check for the magazine before querying
comments (use magazineRepository.existsById or findById) and throw the same
NotFound/EntityNotFound exception used elsewhere in this service when the
magazine does not exist; then proceed to call
commentRepository.findByMagazineIdOrderByCreatedAtDesc(magazineId).stream().map(CommentConverter::from).toList()
so behavior matches other methods that validate magazine existence first.

In
`@src/main/java/com/ongil/backend/domain/magazine/service/MagazineCrawlingService.java`:
- Around line 36-86: The crawlAndSave method is missing a transactional boundary
so magazineRepository.saveAll(newMagazines) can partially persist on failures;
annotate the crawlAndSave method with `@Transactional` to execute the saveAll
within a single transaction (import
org.springframework.transaction.annotation.Transactional if not present) and
ensure the class is a Spring-managed component (e.g., `@Service`) so transactions
are applied; keep the existing exception handling but let transaction rollback
on runtime exceptions by not catching and swallowing them (or rethrow after
logging) to ensure rollback behavior for crawlAndSave and
magazineRepository.saveAll.

In
`@src/main/java/com/ongil/backend/domain/magazine/service/MagazineService.java`:
- Around line 62-81: Add a DB-level uniqueness constraint on MagazineBookmark
for (user_id, magazine_id) (e.g., `@Table`(uniqueConstraints =
`@UniqueConstraint`(...)) in the MagazineBookmark entity) and update
MagazineService.toggleBookmark to catch DataIntegrityViolationException around
bookmarkRepository.save(...) to treat a duplicate-insert as "already bookmarked"
(return false or handle idempotently); alternatively apply pessimistic locking
on the lookup by annotating the repository method findByUserAndMagazine(...)
with `@Lock`(LockModeType.PESSIMISTIC_WRITE) to prevent concurrent inserts —
implement one of these approaches and ensure the error handling around
bookmarkRepository.save in toggleBookmark uses the chosen strategy.

In `@src/main/java/com/ongil/backend/global/config/SecurityConfig.java`:
- Around line 80-81: SecurityConfig currently calls
.requestMatchers("/api/magazines/**").permitAll() which overexposes protected
endpoints; remove that blanket rule and replace it with method- and
path-specific rules inside the authorize configuration: require authentication
(e.g., .authenticated()) for POST /magazines/{id}/bookmark, GET
/magazines/bookmarks, POST /magazines/comments/{id}, POST
/magazines/{commentId}/like, and require admin role (e.g., .hasRole("ADMIN"))
for POST /api/magazines/crawl; keep only truly public GET magazine listing
endpoints as permitAll if desired. Locate this change in the SecurityConfig
class where requestMatchers/permitAll are configured and add the
HttpMethod-specific requestMatchers for the exact paths listed instead of the
wildcard.
🧹 Nitpick comments (13)
src/main/java/com/ongil/backend/global/common/exception/ErrorCode.java (1)

61-63: Check the error-code numbering inconsistency

COMMENT_NOT_FOUND starts at "COMM-002", which makes "COMM-001" look missing. If this is intentional, ignore it; if it is a typo, change it to "COMM-001", or confirm the number is reserved for a future error code.

Also, other domains use full names (USER, PRODUCT, BRAND, etc.) while MAG and COMM are abbreviations. For consistency, consider the MAGAZINE-001 and COMMENT-001 form.

♻️ Suggested fix
 	// MAGAZINE
-	MAGAZINE_NOT_FOUND(HttpStatus.NOT_FOUND, "매거진을 찾을 수 없습니다.", "MAG-001"),
-	COMMENT_NOT_FOUND(HttpStatus.NOT_FOUND, "댓글을 찾을 수 없습니다.", "COMM-002"),
+	MAGAZINE_NOT_FOUND(HttpStatus.NOT_FOUND, "매거진을 찾을 수 없습니다.", "MAGAZINE-001"),
+	COMMENT_NOT_FOUND(HttpStatus.NOT_FOUND, "댓글을 찾을 수 없습니다.", "COMMENT-001"),
src/main/java/com/ongil/backend/domain/magazine/repository/MagazineRepository.java (2)

18-19: ORDER BY RAND() performance caution

In MySQL, a native-query ORDER BY RAND() scans the entire table and assigns a random value to every row before sorting. Performance will degrade as magazine data grows.

This is fine at the current early stage, but as data grows consider these alternatives:

  • Random selection within the ID range at the application level
  • Caching and reusing the recommendation list
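The application-level alternative above can be sketched as follows. This is a hedged illustration under stated assumptions: `RandomIdPicker` and `pickIds` are hypothetical names, not repository code, and the picked IDs would then be fetched with a `WHERE id IN (...)` query; gaps from deleted rows mean some picks may miss, so callers should over-sample or retry.

```java
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;

// Alternative to ORDER BY RAND(): pick random IDs in the known [minId, maxId]
// range in the application, then fetch only those rows by primary key.
public class RandomIdPicker {
    public static Set<Long> pickIds(long minId, long maxId, int count, Random random) {
        if (maxId < minId) throw new IllegalArgumentException("empty range");
        long span = maxId - minId + 1;
        // cap the target at the range size so the loop always terminates
        int target = (int) Math.min(count, span);
        Set<Long> ids = new LinkedHashSet<>();
        while (ids.size() < target) {
            ids.add(minId + (long) (random.nextDouble() * span)); // uniform in [minId, maxId]
        }
        return ids;
    }
}
```
Because the DB lookup becomes a primary-key `IN` query, it stays index-only regardless of table size, unlike a full-table `ORDER BY RAND()`.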

15-16: @Repository annotation unnecessary (optional refactor)

When an interface extends JpaRepository, Spring Data JPA registers the bean automatically, so the @Repository annotation is redundant. It is harmless, but it can be removed for brevity.

build.gradle (1)

70-72: Check the jsoup dependency version

jsoup 1.17.2 is a stable version with no known security vulnerabilities. It was released in December 2023, however, and as of now (January 2026) the latest version is 1.22.1. Consider upgrading as needed.

src/main/java/com/ongil/backend/domain/magazine/converter/CommentConverter.java (1)

10-19: Consider NPE guards when accessing the User association

The chained access on comment.getUser() (getId(), getName(), getProfileImg()) throws an NPE if user is null. In normal business logic a comment must always have a user, but data-integrity problems or future refactoring could produce unexpected situations.

As with MagazineConverter, also consider a null check for createdAt.

🛡️ Defensive coding example
 public static CommentResDto from(MagazineComment comment) {
+	User user = comment.getUser();
 	return CommentResDto.builder()
 		.commentId(comment.getId())
-		.userId(comment.getUser().getId())
-		.userName(comment.getUser().getName())
-		.userProfileImage(comment.getUser().getProfileImg())
+		.userId(user != null ? user.getId() : null)
+		.userName(user != null ? user.getName() : null)
+		.userProfileImage(user != null ? user.getProfileImg() : null)
 		.content(comment.getContent())
 		.likeCount(comment.getLikeCount())
-		.createdAt(comment.getCreatedAt().toString())
+		.createdAt(comment.getCreatedAt() != null ? comment.getCreatedAt().toString() : null)
 		.build();
 }
src/main/java/com/ongil/backend/domain/magazine/scheduler/MagazineScheduler.java (1)

22-37: A "completed" log is emitted even when crawling fails

Even if crawling fails for some categories, the "crawling completed" log is still emitted, which can be misleading during monitoring. Track success/failure counts and log the actual outcome.

📊 Improved logging suggestion
 private void crawlAll() {
+	int successCount = 0;
+	int failCount = 0;
 	for (MagazineCategory category : MagazineCategory.values()) {
 		try {
 			magazineCrawlingService.crawlAndSave(category);
+			successCount++;
 		} catch (Exception e) {
 			log.error("카테고리 크롤링 실패: {}", category, e);
+			failCount++;
 		}
 	}
+	log.info("크롤링 결과 - 성공: {}, 실패: {}", successCount, failCount);
 }

And in the weeklyCrawlMagazines() method:

 	@Scheduled(cron = "0 0 3 ? * MON")
 public void weeklyCrawlMagazines() {
 	log.info("매거진 주간 크롤링 시작");
 	crawlAll();
-	log.info("매거진 주간 크롤링 완료");
+	log.info("매거진 주간 크롤링 종료");
 }
src/main/java/com/ongil/backend/domain/magazine/entity/MagazineComment.java (1)

41-42: Check the potential race condition around likeCount

If multiple users like/unlike at the same time, concurrent calls to increaseLikeCount() and decreaseLikeCount() can cause lost updates. The service layer is protected by @Transactional, but in a highly concurrent environment consider pessimistic locking (@Lock(PESSIMISTIC_WRITE)) or optimistic locking via @Version.

🔒 Optimistic locking example
+	@Version
+	private Long version;
+
 	public void increaseLikeCount() { this.likeCount++; }
 	public void decreaseLikeCount() { if(this.likeCount > 0) this.likeCount--; }
src/main/java/com/ongil/backend/domain/magazine/service/MagazineCommentService.java (1)

42-47: Setting likeCount(0) is redundant given @Builder.Default

The MagazineComment entity already sets likeCount = 0 via @Builder.Default, so there is no need to set it explicitly in the builder. It can be removed, though leaving it as an explicit statement of intent is also acceptable.

src/main/java/com/ongil/backend/domain/magazine/entity/Magazine.java (1)

14-19: The unique constraint on the url field is defined twice

Both @Table(uniqueConstraints = {...}) and @Column(unique = true) are applied. The same constraint may be created twice, so it is better to keep only one. The @Table-level definition is recommended so the constraint gets an explicit name.

♻️ Remove the duplicate constraint
-	@Column(unique = true)
+	@Column
 	private String url;

Also applies to: 37-38

src/main/java/com/ongil/backend/domain/magazine/service/MagazineCrawlingService.java (2)

73-75: Consider IN-query performance for large URL lists

When there are many candidates, the IN clause of findAllUrlsByUrlIn can hit DB limits (Oracle caps IN lists at 1,000 items; MySQL has no hard limit but performance degrades). Consider batching.

♻️ Batching example
// Split the IN query into chunks
private static final int BATCH_SIZE = 500;

List<String> candidateUrls = candidates.stream().map(Magazine::getUrl).toList();
Set<String> existUrls = new HashSet<>();

for (int i = 0; i < candidateUrls.size(); i += BATCH_SIZE) {
    List<String> batch = candidateUrls.subList(i, Math.min(i + BATCH_SIZE, candidateUrls.size()));
    existUrls.addAll(magazineRepository.findAllUrlsByUrlIn(batch));
}

88-95: Search queries are hard-coded

The per-category search keywords are embedded directly in the code, so changing them requires a redeploy. Externalizing them to a configuration file or the DB would improve flexibility.

src/main/java/com/ongil/backend/domain/magazine/service/MagazineService.java (2)

52-58: View-count increments can be lost under concurrent access
Under concurrent requests, addViewCount() can race and lose some increments; consider an atomic DB update (e.g., a single UPDATE query) or optimistic locking via @Version.
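The lost-update risk above can be illustrated in plain Java (a sketch, not the service code; `AtomicViewCount` is a hypothetical name): funneling every increment through one atomic operation guarantees an exact count, which is the in-memory analogue of executing a single atomic `UPDATE magazine SET view_count = view_count + 1` in the database instead of read-modify-write in the entity.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates that a single atomic increment is exact under contention,
// unlike an unsynchronized field++ which can lose updates.
public class AtomicViewCount {
    public static int countWithAtomic(int increments, int threads) {
        AtomicInteger viewCount = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < increments; i++) {
            pool.execute(viewCount::incrementAndGet); // safe under contention
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return viewCount.get(); // exactly `increments` once all tasks finish
    }
}
```
The same reasoning applies at the DB level: a `@Modifying` bulk UPDATE or a `@Version` field moves the increment into one atomic step instead of a racy read-then-save.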


84-96: Consider paging or a limit for the bookmark-list query
As the user base grows, response size and memory usage will grow too; PageRequest-based paging or a limit is recommended.

Comment on lines +103 to +118

	@Hidden
	@PostMapping("/crawl")
	@Operation(summary = "매거진 수동 크롤링", description = "매거진 데이터가 부족할 경우를 위한 api")
	public DataResponse<String> manualCrawl(@RequestParam(required = false) MagazineCategory category) {
		CompletableFuture.runAsync(() -> {
			if (category != null) {
				crawlingService.crawlAndSave(category);
			} else {
				// 카테고리 미지정 시 전체 크롤링
				for (MagazineCategory cat : MagazineCategory.values()) {
					crawlingService.crawlAndSave(cat);
				}
			}
		});

		return DataResponse.from("수동 크롤링 시작");
	}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find and read the MagazineController file
find . -path "*/src/main/java/com/ongil/backend/domain/magazine/controller/MagazineController.java" -type f

Repository: IT-Cotato/12th-OnGil-BE

Length of output: 151




The manual crawling endpoint must be protected as admin-only.

SecurityConfig currently sets /api/magazines/** to permitAll(), so this endpoint is publicly exposed. @Hidden only hides it from the Swagger docs; it provides no security. In addition, CompletableFuture.runAsync() runs on the common ForkJoinPool because no explicit executor is passed, so exceptions can be silently dropped, and the existing taskExecutor bean is not being used.

Required fixes:

  1. In SecurityConfig, explicitly require authentication for the /api/magazines/crawl path
  2. Pass the taskExecutor bean (already defined in AsyncConfig) as the second argument to CompletableFuture.runAsync()
  3. If needed, add admin-role-based access control later
🔧 Recommended fix
+import org.springframework.core.task.TaskExecutor;
 ...
 public class MagazineController {
 	private final MagazineService magazineService;
 	private final MagazineCommentService commentService;
 	private final MagazineCrawlingService crawlingService;
+	private final TaskExecutor taskExecutor;
 ...
 	@Hidden
 	@PostMapping("/crawl")
 	@Operation(summary = "매거진 수동 크롤링", description = "매거진 데이터가 부족할 경우를 위한 api")
 	public DataResponse<String> manualCrawl(@RequestParam(required = false) MagazineCategory category) {
 		CompletableFuture.runAsync(() -> {
 			if (category != null) {
 				crawlingService.crawlAndSave(category);
 			} else {
 				// 카테고리 미지정 시 전체 크롤링
 				for (MagazineCategory cat : MagazineCategory.values()) {
 					crawlingService.crawlAndSave(cat);
 				}
 			}
-		});
+		}, taskExecutor);
 
 		return DataResponse.from("수동 크롤링 시작");
 	}

Also add explicit rules in SecurityConfig:

 .authorizeHttpRequests(auth -> auth
     // ...
-    .requestMatchers("/api/magazines/**").permitAll()
+    .requestMatchers("/api/magazines/crawl").authenticated()  // crawling requires authentication
+    .requestMatchers("/api/magazines/**").permitAll()  // everything else stays public

Comment on lines +9 to +38

public record MagazineResDto(

	@Schema(description = "매거진 ID")
	@NotNull
	Long id,

	@Schema(description = "매거진 제목")
	@NotNull
	String title,

	@Schema(description = "매거진 내용")
	String content,

	@Schema(description = "카테고리 (가격, 체형, 소재, 색상)")
	@NotNull
	String category,

	@Schema(description = "썸네일 이미지 URL")
	String thumbnailUrl,

	@Schema(description = "원문 뉴스 URL")
	@NotNull
	String originalUrl,

	@Schema(description = "언론사명")
	String press,

	@Schema(description = "발행 시간")
	String publishedAt
) {

⚠️ Potential issue | 🟡 Minor



A viewCount field must be added to MagazineResDto

The Magazine entity has a viewCount field, increments it via addViewCount(), and getMagazinesByCategory() sorts by view count (OrderByViewCountDescPublishedAtDesc). However, MagazineConverter.from() does not map this field into the DTO.

ProductSimpleResponse and ProductDetailResponse both include viewCount, so MagazineResDto should follow the same pattern and include it as well.

Example MagazineConverter fix
public static MagazineResDto from(Magazine magazine) {
    return MagazineResDto.builder()
        .id(magazine.getId())
        .title(magazine.getTitle())
        .content(magazine.getContent())
        .category(magazine.getCategory().name())
        .thumbnailUrl(magazine.getThumbnailImageUrl())
        .originalUrl(magazine.getUrl())
        .press(magazine.getPress())
        .publishedAt(magazine.getPublishedAt().toString())
        .viewCount(magazine.getViewCount())  // added
        .build();
}

@Override
public Optional<Magazine> parse(String url, MagazineCategory category) {
	try {
		Thread.sleep(800 + (long)(Math.random() * 500)); // 랜덤 지연

⚠️ Potential issue | 🟡 Minor

Thread.sleepInterruptedException을 적절히 처리하지 않습니다.

Thread.sleepInterruptedException을 발생시킬 수 있으며, 현재 catch-all Exception으로 처리되지만 스레드의 인터럽트 상태가 복원되지 않습니다. 또한, 동기 호출 시 요청 스레드를 800~1300ms 동안 블로킹합니다.

🛡️ InterruptedException 처리 개선
 try {
-	Thread.sleep(800 + (long)(Math.random() * 500)); // 랜덤 지연
+	Thread.sleep(800 + (long)(Math.random() * 500));
+} catch (InterruptedException ie) {
+	Thread.currentThread().interrupt();
+	log.warn("크롤링 중 인터럽트 발생: {}", url);
+	return Optional.empty();
+}
+try {
 	Document doc = Jsoup.connect(url)
🤖 Prompt for AI Agents
In `@src/main/java/com/ongil/backend/domain/magazine/parser/NaverNewsParser.java`
at line 24, Thread.sleep(800 + (long)(Math.random() * 500)) in NaverNewsParser
is swallowing InterruptedException and blocking the caller; change it to handle
interruption properly by catching InterruptedException separately, calling
Thread.currentThread().interrupt() to restore the interrupt status and either
propagate the interruption (rethrow or return) instead of continuing, and avoid
blocking synchronous request threads by moving the delay to an async executor
(e.g., ScheduledExecutorService or CompletableFuture.delayedExecutor) used by
the parsing method (the method in NaverNewsParser where Thread.sleep is called)
so delays run off-thread.
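The interrupt-restoration pattern the suggestion describes can be shown in isolation; the helper class, its name, and the 10-second sleep below are made up for the demo.

```java
// Standalone demo of restoring the interrupt flag instead of swallowing
// InterruptedException; InterruptiblePause is a hypothetical helper.
public class InterruptiblePause {
    // Returns false if the calling thread was interrupted while sleeping,
    // after re-setting its interrupt status for callers to observe.
    static boolean pause(long millis) {
        try {
            Thread.sleep(millis);
            return true;
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt(); // restore the flag
            return false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            boolean finished = pause(10_000);
            System.out.println("finished=" + finished
                    + " interrupted=" + Thread.currentThread().isInterrupted());
        });
        worker.start();
        Thread.sleep(100);
        worker.interrupt(); // simulates a shutdown while the crawler sleeps
        worker.join();      // prints finished=false interrupted=true
    }
}
```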

Comment on lines +54 to +59
public List<CommentResDto> getComments(Long magazineId) {
	return commentRepository.findByMagazineIdOrderByCreatedAtDesc(magazineId)
		.stream()
		.map(CommentConverter::from)
		.toList();
}
⚠️ Potential issue | 🟡 Minor

getComments does not verify that the magazine exists.

A call with a nonexistent magazineId silently returns an empty list. For consistent API behavior, check that the magazine exists first, as the other methods in this service do.

🛡️ Suggested magazine-existence check
 public List<CommentResDto> getComments(Long magazineId) {
+	if (!magazineRepository.existsById(magazineId)) {
+		throw new EntityNotFoundException(ErrorCode.MAGAZINE_NOT_FOUND);
+	}
 	return commentRepository.findByMagazineIdOrderByCreatedAtDesc(magazineId)
 		.stream()
 		.map(CommentConverter::from)
 		.toList();
 }
🤖 Prompt for AI Agents
In
`@src/main/java/com/ongil/backend/domain/magazine/service/MagazineCommentService.java`
around lines 54 - 59, getComments currently returns an empty list for a
non-existent magazineId; add a presence check for the magazine before querying
comments (use magazineRepository.existsById or findById) and throw the same
NotFound/EntityNotFound exception used elsewhere in this service when the
magazine does not exist; then proceed to call
commentRepository.findByMagazineIdOrderByCreatedAtDesc(magazineId).stream().map(CommentConverter::from).toList()
so behavior matches other methods that validate magazine existence first.
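The behavioral difference the comment is after — an exception for a missing magazine versus an empty list for an existing magazine with no comments — can be sketched without Spring; the in-memory maps and exception type below are stand-ins, not the project's repositories or ErrorCode.

```java
// Plain-Java sketch of the validate-then-query pattern the review suggests.
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

public class GuardSketch {
    static final Set<Long> MAGAZINES = Set.of(1L, 2L);               // existing magazine ids
    static final Map<Long, List<String>> COMMENTS = Map.of(1L, List.of("nice"));

    static List<String> getComments(long magazineId) {
        if (!MAGAZINES.contains(magazineId)) {
            // stand-in for EntityNotFoundException(ErrorCode.MAGAZINE_NOT_FOUND)
            throw new NoSuchElementException("MAGAZINE_NOT_FOUND");
        }
        return COMMENTS.getOrDefault(magazineId, List.of());
    }

    public static void main(String[] args) {
        System.out.println(getComments(1L)); // [nice]
        System.out.println(getComments(2L)); // [] -- exists, just no comments
        try {
            getComments(99L);
        } catch (NoSuchElementException e) {
            System.out.println(e.getMessage()); // MAGAZINE_NOT_FOUND
        }
    }
}
```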

Comment on lines +36 to +86
public void crawlAndSave(MagazineCategory category) {
	List<String> searchQueries = getSearchQueriesByCategory(category);
	Set<String> visitedUrls = new HashSet<>();
	List<Magazine> candidates = new ArrayList<>();

	for (String searchQuery : searchQueries) {
		log.info("=== 크롤링 시작 | 카테고리: {} | 검색어: {} ===", category, searchQuery);
		for (int page = 1; page <= MAX_PAGES; page++) {

			int start = 1 + (page - 1) * 10;
			String url = String.format(
				SEARCH_URL,
				URLEncoder.encode(searchQuery, StandardCharsets.UTF_8),
				start
			);

			try {
				Document doc = Jsoup.connect(url)
					.userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
					.timeout(10000)
					.get();

				List<String> articleLinks = extractArticleLinks(doc);

				for (String articleUrl : articleLinks) {
					if (!visitedUrls.add(articleUrl)) continue;

					articleParser.parse(articleUrl, category)
						.ifPresent(candidates::add);
				}

			} catch (IOException e) {
				log.warn("검색어 [{}] 페이지 {} 크롤링 실패", searchQuery, page, e);
			}
		}
	}

	Set<String> existUrls = magazineRepository.findAllUrlsByUrlIn(
		candidates.stream().map(Magazine::getUrl).toList()
	);

	List<Magazine> newMagazines = candidates.stream()
		.filter(m -> !existUrls.contains(m.getUrl()))
		.toList();

	log.info("후보군 개수: {} | DB 중복 제외 후 신규 개수: {}", candidates.size(), newMagazines.size());
	magazineRepository.saveAll(newMagazines);

	log.info("=== 크롤링 종료 | 카테고리: {} | 저장 {}건 ===",
		category, newMagazines.size());
}
⚠️ Potential issue | 🟠 Major

The crawlAndSave method is missing `@Transactional`.

`magazineRepository.saveAll(newMagazines)` is called without a transaction, so each entity is persisted in its own transaction. A failure partway through can leave only some of the entities saved.

🔧 Add `@Transactional`
+	@Transactional
 	public void crawlAndSave(MagazineCategory category) {
 		List<String> searchQueries = getSearchQueriesByCategory(category);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

@Transactional
public void crawlAndSave(MagazineCategory category) {
	List<String> searchQueries = getSearchQueriesByCategory(category);
	Set<String> visitedUrls = new HashSet<>();
	List<Magazine> candidates = new ArrayList<>();

	for (String searchQuery : searchQueries) {
		log.info("=== 크롤링 시작 | 카테고리: {} | 검색어: {} ===", category, searchQuery);
		for (int page = 1; page <= MAX_PAGES; page++) {

			int start = 1 + (page - 1) * 10;
			String url = String.format(
				SEARCH_URL,
				URLEncoder.encode(searchQuery, StandardCharsets.UTF_8),
				start
			);

			try {
				Document doc = Jsoup.connect(url)
					.userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
					.timeout(10000)
					.get();

				List<String> articleLinks = extractArticleLinks(doc);

				for (String articleUrl : articleLinks) {
					if (!visitedUrls.add(articleUrl)) continue;

					articleParser.parse(articleUrl, category)
						.ifPresent(candidates::add);
				}

			} catch (IOException e) {
				log.warn("검색어 [{}] 페이지 {} 크롤링 실패", searchQuery, page, e);
			}
		}
	}

	Set<String> existUrls = magazineRepository.findAllUrlsByUrlIn(
		candidates.stream().map(Magazine::getUrl).toList()
	);

	List<Magazine> newMagazines = candidates.stream()
		.filter(m -> !existUrls.contains(m.getUrl()))
		.toList();

	log.info("후보군 개수: {} | DB 중복 제외 후 신규 개수: {}", candidates.size(), newMagazines.size());
	magazineRepository.saveAll(newMagazines);

	log.info("=== 크롤링 종료 | 카테고리: {} | 저장 {}건 ===",
		category, newMagazines.size());
}
🤖 Prompt for AI Agents
In
`@src/main/java/com/ongil/backend/domain/magazine/service/MagazineCrawlingService.java`
around lines 36 - 86, The crawlAndSave method is missing a transactional
boundary so magazineRepository.saveAll(newMagazines) can partially persist on
failures; annotate the crawlAndSave method with `@Transactional` to execute the
saveAll within a single transaction (import
org.springframework.transaction.annotation.Transactional if not present) and
ensure the class is a Spring-managed component (e.g., `@Service`) so transactions
are applied; keep the existing exception handling but let transaction rollback
on runtime exceptions by not catching and swallowing them (or rethrow after
logging) to ensure rollback behavior for crawlAndSave and
magazineRepository.saveAll.
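Independent of the transaction question, the two-stage de-duplication in crawlAndSave — an in-memory Set for within-run repeats, then a URL lookup standing in for `findAllUrlsByUrlIn` — can be exercised on its own:

```java
// Standalone sketch of the crawler's dedup logic; the repository lookup is
// replaced by a plain Set, so this is illustrative, not the project's code.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupSketch {
    // Drops within-run duplicates (order-preserving), then anything already persisted.
    static List<String> freshUrls(List<String> crawled, Set<String> existing) {
        Set<String> visited = new HashSet<>();
        List<String> candidates = new ArrayList<>();
        for (String url : crawled) {
            if (!visited.add(url)) continue; // within-run duplicate, like visitedUrls
            candidates.add(url);
        }
        return candidates.stream()
                .filter(u -> !existing.contains(u)) // DB dedup, like findAllUrlsByUrlIn
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(freshUrls(List.of("a", "b", "a", "c"), Set.of("b"))); // [a, c]
    }
}
```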

@marshmallowing marshmallowing merged commit 5789bce into develop Jan 30, 2026
1 check was pending

Labels

✨ Feature (feature development)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Feat] Magazine crawling and community features

1 participant