
Add database models for get tasks API #9

Merged
iamitprakash merged 5 commits into develop from feat-get-todo-api-add-models on Mar 12, 2025

Conversation

samarpan1738 (Contributor) commented Dec 20, 2024

Date: 20 Dec 2024

Developer Name: @Achintya-Chatterjee


Issue Ticket Number

Description

  • Install pydantic, which is used for creating the database models
  • Add task and label database models
  • Add tests for task and label database models

Documentation Updated?

  • Yes
  • No

Under Feature Flag

  • Yes
  • No

Database Changes

  • Yes
  • No

Breaking Changes

  • Yes
  • No

Development Tested?

  • Yes
  • No

Screenshots

Screenshot 1

Test Coverage

Coverage report screenshot (2024-12-21, 12:12 AM)

Additional Notes

The CI is failing right now because there is an issue in the workflow. I have raised a PR to fix this. Once it's merged, the CI should pass.

Summary by CodeRabbit

  • New Features

    • Enhanced task management with additional statuses and priority levels for streamlined organization.
    • Improved label management featuring color coding and detailed metadata for better visual tracking.
    • Introduced new enumerations for task status and priority, along with a document model for structured data representation.
  • Chores

    • Updated system dependencies to boost data validation and overall performance.
    • Expanded coverage exclusion criteria to omit test files from reporting.

samarpan1738 changed the title from "Add models for get tasks API" to "Add database models for get tasks API" on Dec 20, 2024

iamitprakash (Member) left a comment


Tests / build (pull_request): Failing after 15s

rishirishhh self-assigned this Feb 2, 2025
rishirishhh commented

Tests / build (pull_request): Failing after 15s

It's failing because there is a PR before this one that will fix the failing CI; PR #8 needs to be merged first.

coderabbitai bot commented Mar 8, 2025

Walkthrough

This pull request updates the coverage configuration and dependency list while introducing several new modules for task and label management. It adds enumerations for task statuses and priorities, a base document class with validation (including a custom ObjectId wrapper), and associated models for labels and tasks. Additionally, new fixtures and comprehensive unit tests have been provided to validate the behavior of these models, including error handling and aliasing features.

Changes

File(s) | Change Summary
.coveragerc | Modified the [coverage:run] section to omit files under */tests/* while retaining manage.py.
requirements.txt | Appended four dependencies: annotated-types==0.7.0, pydantic==2.10.1, pydantic_core==2.27.1, and typing_extensions==4.12.2.
todo/constants/task.py | Added new enumerations: TaskStatus (with states: TODO, IN_PROGRESS, DEFERRED, BLOCKED, DONE) and TaskPriority (with levels: HIGH, MEDIUM, LOW).
todo/models/common/document.py, todo/models/common/pyobjectid.py | Introduced a new abstract Document class enforcing a static collection_name, and a PyObjectId class extending ObjectId with custom validation methods.
todo/models/label.py | Created the LabelModel class extending Document, defining attributes for label management.
todo/models/task.py | Added DeferredDetailsModel and TaskModel classes for representing task details and overall task entities, including aliasing of the id field.
todo/tests/fixtures/label.py, todo/tests/fixtures/task.py | Introduced fixture data for labels and tasks to support testing scenarios.
todo/tests/unit/models/__init__.py, todo/tests/unit/models/common/__init__.py | Added __init__.py files to enable Django's automatic test detection in the specified directories.
todo/tests/unit/models/common/test_document.py, todo/tests/unit/models/common/test_pyobjectid.py, todo/tests/unit/models/test_label.py, todo/tests/unit/models/test_task.py | Added unit test suites validating the custom Document behavior, the PyObjectId validation logic, and the instantiation and error handling of LabelModel and TaskModel.
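
To make the table above concrete, here is a hedged sketch of how the task model it describes might be shaped. Field names are drawn from the review comments further down in this thread; the nested DeferredDetailsModel fields, the types, and the defaults are assumptions rather than the PR's exact code.

from datetime import datetime
from pydantic import BaseModel, Field

from todo.constants.task import TaskPriority, TaskStatus  # enums listed in the table above


class DeferredDetailsSketch(BaseModel):
    # Illustrative only: the real DeferredDetailsModel's fields are not reproduced in this thread
    deferredTill: datetime | None = None
    deferredBy: str | None = None


class TaskModelSketch(BaseModel):
    # The real TaskModel extends the PR's Document base class and aliases a PyObjectId field to MongoDB's _id
    id: str | None = Field(None, alias="_id")
    displayId: str
    title: str
    status: TaskStatus = TaskStatus.TODO          # assumed default
    priority: TaskPriority = TaskPriority.LOW     # assumed default
    deferredDetails: DeferredDetailsSketch | None = None
    createdAt: datetime
    createdBy: str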

Sequence Diagram(s)

sequenceDiagram
    actor Client as "Client"
    participant Subclass as "Document Subclass"
    participant Base as "Document (Base)"
    Client->>Subclass: Instantiate subclass
    Subclass->>Base: __init_subclass__() check for collection_name
    alt Missing/Invalid collection_name
        Base-->>Client: Raise TypeError
    else Valid collection_name
        Base-->>Client: Instance created successfully
    end
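
For illustration, a minimal, hypothetical sketch of the enforcement pattern this first diagram describes (the PR's actual Document class also mixes in Pydantic model behaviour and the _id aliasing discussed later in the review):

from typing import ClassVar

class Document:
    collection_name: ClassVar[str]

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Fail fast at class-definition time if a subclass forgets its MongoDB collection name
        if not isinstance(getattr(cls, "collection_name", None), str):
            raise TypeError(f"{cls.__name__} must define a ClassVar[str] 'collection_name'")

class ValidDocument(Document):
    collection_name: ClassVar[str] = "valid_collection"   # accepted

try:
    class MissingNameDocument(Document):                   # no collection_name defined
        pass
except TypeError as exc:
    print(exc)   # MissingNameDocument must define a ClassVar[str] 'collection_name'
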
sequenceDiagram
    actor Input as "Input Value"
    participant Validator as "PyObjectId.validate"
    Input->>Validator: Provide value
    Validator->>Validator: Check if value is None
    alt Value is not None
        Validator->>Validator: Validate using ObjectId.is_valid
        alt Valid ObjectId
            Validator-->>Input: Return ObjectId instance
        else Invalid ObjectId
            Validator-->>Input: Raise ValueError
        end
    else
        Validator-->>Input: Return None
    end
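
And a quick usage sketch of the validation flow in this second diagram, assuming the module layout listed in the changes table (todo/models/common/pyobjectid.py) and the validate behaviour described there:

from todo.models.common.pyobjectid import PyObjectId

print(PyObjectId.validate(None))                         # None passes through unchanged
print(PyObjectId.validate("672f7c5b775ee9f4471ff1dd"))   # valid 24-char hex string -> ObjectId instance
try:
    PyObjectId.validate("not-an-objectid")
except ValueError as exc:
    print(exc)                                           # e.g. "Invalid ObjectId: not-an-objectid"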

Poem

I'm a rabbit on a coding spree,
Hopping over tests with joyful glee.
New models and enums line my burrow,
With every line, my whiskers glow.
I celebrate changes with a playful hop,
Bunny cheers for code that just won't stop!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f91c23f and da78cc6.

📒 Files selected for processing (3)
  • todo/tests/fixtures/label.py (1 hunks)
  • todo/tests/fixtures/task.py (1 hunks)
  • todo/tests/unit/models/test_task.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • todo/tests/fixtures/task.py
  • todo/tests/fixtures/label.py
  • todo/tests/unit/models/test_task.py

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (15)
todo/models/common/pyobjectid.py (1)

1-16: Consider simplifying the validation logic and adding docstrings.

The PyObjectId class provides an essential wrapper for MongoDB's ObjectId to work with Pydantic, but it could benefit from some improvements:

  1. The validation logic contains a redundant check on line 13 (value is not None) since you've already handled the None case separately.
  2. The class and methods lack docstrings explaining their purpose and usage.
 from bson import ObjectId


 class PyObjectId(ObjectId):
+    """Custom wrapper for MongoDB ObjectId to work with Pydantic models."""
     @classmethod
     def __get_validators__(cls):
+        """Return a list of validator methods for Pydantic."""
         yield cls.validate

     @classmethod
     def validate(cls, value, field=None):
+        """Validate and convert input to ObjectId.
+        
+        Args:
+            value: The value to validate
+            field: The Pydantic field (unused)
+            
+        Returns:
+            ObjectId instance or None
+            
+        Raises:
+            ValueError: If the value is not a valid ObjectId
+        """
         if value is None:
             return None
-        if value is not None and ObjectId.is_valid(value):
+        if ObjectId.is_valid(value):
             return ObjectId(value)
         raise ValueError(f"Invalid ObjectId: {value}")
todo/constants/task.py (1)

12-16: TaskPriority enum implementation is appropriate

Using numeric values for priorities is a good approach, making it easy to sort tasks by priority level. The implementation is clean and straightforward.

Consider adding docstrings to both enum classes to better document their purpose and usage, especially for other developers who may work with this code in the future:

 class TaskStatus(Enum):
+    """Represents the possible states of a task in the application."""
     TODO = "TODO"
     IN_PROGRESS = "IN_PROGRESS"
     DEFERRED = "DEFERRED"
     BLOCKED = "BLOCKED"
     DONE = "DONE"


 class TaskPriority(Enum):
+    """Represents the priority levels for tasks, with lower numbers indicating higher priority."""
     HIGH = 1
     MEDIUM = 2
     LOW = 3
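
As a small illustration of that point, tasks can be ordered directly on the enum's numeric value; a sketch assuming the TaskPriority definition shown above, with made-up sample tasks:

from enum import Enum

class TaskPriority(Enum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3

tasks = [
    {"title": "write docs", "priority": TaskPriority.LOW},
    {"title": "fix prod bug", "priority": TaskPriority.HIGH},
    {"title": "refactor tests", "priority": TaskPriority.MEDIUM},
]
tasks.sort(key=lambda task: task["priority"].value)       # lower value sorts first
print([task["title"] for task in tasks])                  # ['fix prod bug', 'refactor tests', 'write docs']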
todo/tests/fixtures/task.py (1)

6-37: Test fixture data looks comprehensive

The fixture provides a good variety of test data with different statuses and priorities. This will be valuable for thorough testing of the task model.

Consider using consistent field naming conventions with MongoDB. In the first task object, you're using "id" while MongoDB typically uses "_id" (which is used in your label fixtures). This inconsistency might cause confusion:

 tasks_db_data = [
     {
-        "id": ObjectId("672f7c5b775ee9f4471ff1dd"),
+        "_id": ObjectId("672f7c5b775ee9f4471ff1dd"),
         "displayId": "#1",
         # rest of the fields...
     },
     {
-        "id": ObjectId("674c726ca89aab38040cb964"),
+        "_id": ObjectId("674c726ca89aab38040cb964"),
         "displayId": "#2",
         # rest of the fields...
     },
 ]

Alternatively, if there's a reason to use "id" instead of "_id", ensure this is consistent across all fixtures.

todo/models/common/document.py (1)

1-7: Consider adding field descriptions for better API documentation

While the code is functionally correct, consider adding descriptions to fields using the description parameter. This improves auto-generated API documentation.

-    id: PyObjectId | None = Field(None, alias="_id")
+    id: PyObjectId | None = Field(None, alias="_id", description="MongoDB document ID")
todo/models/label.py (2)

6-15: Consider additional field validation for label attributes

The LabelModel is well-structured but lacks validation constraints on fields like name and color. Consider adding validation to ensure data integrity.

-    name: str
-    color: str
+    name: str = Field(..., min_length=1, max_length=50)
+    color: str = Field(..., pattern=r"^#[0-9A-Fa-f]{6}$")

6-15: Consider using snake_case for field names to align with Python conventions

While not critical, the model uses camelCase field names (isDeleted, createdAt, etc.) rather than Python's conventional snake_case. This creates a stylistic inconsistency with standard Python code.

If this is an intentional design choice to match external API conventions or MongoDB naming, please consider documenting this decision.

todo/tests/unit/models/test_label.py (2)

20-33: Fix typo in test method name

There's a spelling error in the test method name.

-    def test_lable_model_throws_error_when_missing_required_fields(self):
+    def test_label_model_throws_error_when_missing_required_fields(self):

20-33: Consider adding tests for validation of field formats

While you're testing for required fields, also consider testing validation of field formats (e.g., color format) and boundary conditions for a more comprehensive test suite.

Example test to add:

def test_label_model_validates_color_format(self):
    invalid_data = self.valid_data.copy()
    invalid_data["color"] = "invalid-color"  # Not a valid hex color
    
    with self.assertRaises(ValidationError) as context:
        LabelModel(**invalid_data)
    
    error = context.exception.errors()[0]
    self.assertEqual(error.get("type"), "pattern_mismatch")
    self.assertEqual(error.get("loc")[0], "color")
todo/models/task.py (4)

11-11: Unused database_manager instance

The database_manager is initialized but not used within this file. Consider removing it if not needed, or document its purpose if it's intended for future use.

-database_manager = DatabaseManager()
-

36-36: Consider default_factory for DeferredDetailsModel

For complex nested models, it's better to use default_factory instead of None.

-    deferredDetails: DeferredDetailsModel | None = None
+    deferredDetails: DeferredDetailsModel = Field(default_factory=DeferredDetailsModel)

27-42: Add field validation for critical task properties

Consider adding validation to ensure data integrity for critical fields.

-    title: str
+    title: str = Field(..., min_length=1, max_length=200)
-    displayId: str
+    displayId: str = Field(..., pattern=r"^TASK-\d+$") 

This helps prevent invalid data from being stored and provides better error messages.


26-26: Redundant id field definition

The id field is already defined in the base Document class, so redefining it here is redundant unless you're changing its configuration.

-    id: PyObjectId | None = Field(None, alias="_id")
todo/tests/unit/models/test_task.py (1)

21-36: Improve test method for missing required fields

The current test method removes all required fields at once and then tests for errors. This approach could mask issues if one field doesn't properly validate or if validation stops after the first error.

Consider testing each required field individually:

-    def test_task_model_throws_error_when_missing_required_fields(self):
-        incomplete_data = self.valid_task_data.copy()
-        required_fields = ["displayId", "title", "createdAt", "createdBy"]
-        for field_name in required_fields:
-            del incomplete_data[field_name]
-
-        with self.assertRaises(ValidationError) as context:
-            TaskModel(**incomplete_data)
-
-        missing_fields_count = 0
-        for error in context.exception.errors():
-            self.assertEqual(error.get("type"), "missing")
-            self.assertIn(error.get("loc")[0], required_fields)
-            missing_fields_count += 1
-        self.assertEqual(missing_fields_count, len(required_fields))
+    def test_task_model_throws_error_when_missing_required_fields(self):
+        required_fields = ["displayId", "title", "createdAt", "createdBy"]
+        for field_name in required_fields:
+            incomplete_data = self.valid_task_data.copy()
+            del incomplete_data[field_name]
+            
+            with self.assertRaises(ValidationError) as context:
+                TaskModel(**incomplete_data)
+                
+            self.assertEqual(len(context.exception.errors()), 1)
+            error = context.exception.errors()[0]
+            self.assertEqual(error.get("type"), "missing")
+            self.assertEqual(error.get("loc")[0], field_name)
todo/tests/unit/models/common/test_document.py (2)

28-30: Remove debugging print statement

The print statement in the exception handler appears to be debugging code that should be removed.

-            print(e)

23-31: Simplify test for valid collection_name

The try/except block adds unnecessary complexity since you're just testing that no exception is raised.

-    def test_subclass_with_valid_collection_name(self):
-        try:
-
-            class ValidDocument(Document):
-                collection_name: ClassVar[str] = "valid_collection"
-        except TypeError as e:
-            print(e)
-            self.fail("TypeError raised for valid Document subclass")
+    def test_subclass_with_valid_collection_name(self):
+        # This should not raise an exception
+        class ValidDocument(Document):
+            collection_name: ClassVar[str] = "valid_collection"
+        
+        # Verify the class was created successfully
+        self.assertEqual(ValidDocument.collection_name, "valid_collection")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b04c396 and f91c23f.

📒 Files selected for processing (15)
  • .coveragerc (1 hunks)
  • requirements.txt (2 hunks)
  • todo/constants/task.py (1 hunks)
  • todo/models/common/document.py (1 hunks)
  • todo/models/common/pyobjectid.py (1 hunks)
  • todo/models/label.py (1 hunks)
  • todo/models/task.py (1 hunks)
  • todo/tests/fixtures/label.py (1 hunks)
  • todo/tests/fixtures/task.py (1 hunks)
  • todo/tests/unit/models/__init__.py (1 hunks)
  • todo/tests/unit/models/common/__init__.py (1 hunks)
  • todo/tests/unit/models/common/test_document.py (1 hunks)
  • todo/tests/unit/models/common/test_pyobjectid.py (1 hunks)
  • todo/tests/unit/models/test_label.py (1 hunks)
  • todo/tests/unit/models/test_task.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
todo/tests/fixtures/label.py

2-2: todo.models.label.LabelModel imported but unused

Remove unused import: todo.models.label.LabelModel

(F401)

todo/tests/fixtures/task.py

2-2: todo.models.task.TaskModel imported but unused

Remove unused import: todo.models.task.TaskModel

(F401)

todo/tests/unit/models/test_task.py

1-1: typing.List imported but unused

Remove unused import: typing.List

(F401)

🪛 GitHub Actions: Tests
todo/tests/fixtures/label.py

[error] 2-2: todo.models.label.LabelModel imported but unused. Remove unused import.

todo/tests/fixtures/task.py

[error] 2-2: todo.models.task.TaskModel imported but unused. Remove unused import.

todo/tests/unit/models/test_task.py

[error] 1-1: typing.List imported but unused. Remove unused import.

🔇 Additional comments (10)
requirements.txt (1)

1-1: Package dependencies have been appropriately added.

The addition of Pydantic and its required dependencies is well aligned with the PR objective of creating database models for tasks and labels. These packages provide robust data validation capabilities that will improve the reliability of the data models.

Also applies to: 13-14, 20-20

todo/tests/unit/models/__init__.py (1)

1-2: Good practice with explicit documentation.

The comment clearly explains the purpose of this empty __init__.py file, which is to enable Django's test discovery. This is good documentation practice.

todo/tests/unit/models/common/__init__.py (1)

1-2: Good practice with explicit documentation.

The comment clearly explains the purpose of this empty __init__.py file, which is to enable Django's test discovery. This is good documentation practice.

.coveragerc (1)

6-7: Coverage exclusion configuration looks good

Excluding test files from coverage reporting is a best practice as we want to measure coverage of application code, not test code. This change aligns well with the PR objective of adding database models and their tests.

todo/constants/task.py (1)

1-10: TaskStatus enum implementation is solid

The enumeration provides a clear and type-safe way to represent the different states a task can be in. This is a good foundation for the task model.

todo/tests/fixtures/label.py (1)

5-20: Label fixture data looks appropriate

The test data contains all the necessary fields for labels with realistic values. Good use of ObjectId for MongoDB compatibility.

todo/models/common/document.py (2)

9-15: Well-structured base Document class with good enforcement of collection_name

The Document class provides a solid foundation for MongoDB models with proper ID field aliasing and subclass validation. The __init_subclass__ method effectively enforces that all subclasses must define a collection_name.


17-20: Good Pydantic configuration for MongoDB compatibility

The Config class properly sets up JSON encoding for ObjectId and enables populate_by_name, which is important for alias handling.
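
For readers who haven't opened the file, a hedged sketch of what that kind of configuration typically looks like on a Pydantic model backed by MongoDB. It mirrors the behaviour described above rather than reproducing the PR's exact class; arbitrary_types_allowed is an assumption added here so the bare ObjectId annotation is accepted.

from bson import ObjectId
from pydantic import BaseModel, Field

class DocumentSketch(BaseModel):
    id: ObjectId | None = Field(None, alias="_id")

    class Config:
        populate_by_name = True            # construct with either the field name `id` or the `_id` alias
        arbitrary_types_allowed = True     # assumption: lets Pydantic accept the raw ObjectId type
        json_encoders = {ObjectId: str}    # emit ObjectId values as strings when dumping to JSON

doc = DocumentSketch(_id=ObjectId("672f7c5b775ee9f4471ff1dd"))
print(doc.id)   # ObjectId('672f7c5b775ee9f4471ff1dd')

In Pydantic 2.x the same settings can also be expressed via model_config = ConfigDict(...), which is the non-deprecated spelling of the class-based Config.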

todo/tests/unit/models/test_label.py (1)

14-19: Good test coverage for model instantiation with valid data

The test properly verifies default values and successful instantiation, which is important for ensuring the model behaves as expected.

todo/tests/unit/models/common/test_pyobjectid.py (1)

1-40: Well-structured and comprehensive test suite for PyObjectId!

Your test coverage for the PyObjectId class is thorough and well-implemented. You've covered validation of valid ObjectIds, handling of invalid ObjectIds, edge cases like None values, and proper integration with Pydantic models.

iamitprakash merged commit ea76f8f into develop on Mar 12, 2025 (2 checks passed)
Achintya-Chatterjee deleted the feat-get-todo-api-add-models branch on March 12, 2025 at 19:16
Achintya-Chatterjee mentioned this pull request on Mar 30, 2025
coderabbitai bot mentioned this pull request on Apr 14, 2025