Add database models for get tasks API #9
Conversation
iamitprakash left a comment:

This is not required
Tests / build (pull_request): Failing after 15s
It's failing because there is a PR before this one that will fix the failing CI.
Walkthrough

This pull request updates the coverage configuration and dependency list while introducing several new modules for task and label management. It adds enumerations for task statuses and priorities, a base document class with validation (including a custom ObjectId wrapper), and associated models for labels and tasks. Additionally, new fixtures and comprehensive unit tests have been provided to validate the behavior of these models, including error handling and aliasing features.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client as "Client"
    participant Subclass as "Document Subclass"
    participant Base as "Document (Base)"
    Client->>Subclass: Instantiate subclass
    Subclass->>Base: __init_subclass__() check for collection_name
    alt Missing/Invalid collection_name
        Base-->>Client: Raise TypeError
    else Valid collection_name
        Base-->>Client: Instance created successfully
    end
```

```mermaid
sequenceDiagram
    actor Input as "Input Value"
    participant Validator as "PyObjectId.validate"
    Input->>Validator: Provide value
    Validator->>Validator: Check if value is None
    alt Value is not None
        Validator->>Validator: Validate using ObjectId.is_valid
        alt Valid ObjectId
            Validator-->>Input: Return ObjectId instance
        else Invalid ObjectId
            Validator-->>Input: Raise ValueError
        end
    else Value is None
        Validator-->>Input: Return None
    end
```
📜 Recent review details

Configuration used: CodeRabbit UI

📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (3)
Actionable comments posted: 4
🧹 Nitpick comments (15)
todo/models/common/pyobjectid.py (1)
1-16: Consider simplifying the validation logic and adding docstrings.

The `PyObjectId` class provides an essential wrapper for MongoDB's `ObjectId` to work with Pydantic, but it could benefit from some improvements:

- The validation logic contains a redundant check on line 13 (`value is not None`), since you've already handled the `None` case separately.
- The class and methods lack docstrings explaining their purpose and usage.

```diff
 from bson import ObjectId


 class PyObjectId(ObjectId):
+    """Custom wrapper for MongoDB ObjectId to work with Pydantic models."""
     @classmethod
     def __get_validators__(cls):
+        """Return a list of validator methods for Pydantic."""
         yield cls.validate

     @classmethod
     def validate(cls, value, field=None):
+        """Validate and convert input to ObjectId.
+
+        Args:
+            value: The value to validate
+            field: The Pydantic field (unused)
+
+        Returns:
+            ObjectId instance or None
+
+        Raises:
+            ValueError: If the value is not a valid ObjectId
+        """
         if value is None:
             return None
-        if value is not None and ObjectId.is_valid(value):
+        if ObjectId.is_valid(value):
             return ObjectId(value)
         raise ValueError(f"Invalid ObjectId: {value}")
```

todo/constants/task.py (1)
12-16: TaskPriority enum implementation is appropriate

Using numeric values for priorities is a good approach, making it easy to sort tasks by priority level. The implementation is clean and straightforward.
Consider adding docstrings to both enum classes to better document their purpose and usage, especially for other developers who may work with this code in the future:
```diff
 class TaskStatus(Enum):
+    """Represents the possible states of a task in the application."""
     TODO = "TODO"
     IN_PROGRESS = "IN_PROGRESS"
     DEFERRED = "DEFERRED"
     BLOCKED = "BLOCKED"
     DONE = "DONE"


 class TaskPriority(Enum):
+    """Represents the priority levels for tasks, with lower numbers indicating higher priority."""
     HIGH = 1
     MEDIUM = 2
     LOW = 3
```

todo/tests/fixtures/task.py (1)
6-37: Test fixture data looks comprehensive

The fixture provides a good variety of test data with different statuses and priorities. This will be valuable for thorough testing of the task model.

Consider using consistent field naming conventions with MongoDB. In the first task object, you're using `"id"` while MongoDB typically uses `"_id"` (which is used in your label fixtures). This inconsistency might cause confusion:

```diff
 tasks_db_data = [
     {
-        "id": ObjectId("672f7c5b775ee9f4471ff1dd"),
+        "_id": ObjectId("672f7c5b775ee9f4471ff1dd"),
         "displayId": "#1",
         # rest of the fields...
     },
     {
-        "id": ObjectId("674c726ca89aab38040cb964"),
+        "_id": ObjectId("674c726ca89aab38040cb964"),
         "displayId": "#2",
         # rest of the fields...
     },
 ]
```

Alternatively, if there's a reason to use `"id"` instead of `"_id"`, ensure this is consistent across all fixtures.

todo/models/common/document.py (1)
1-7: Consider adding field descriptions for better API documentation

While the code is functionally correct, consider adding descriptions to fields using the `description` parameter. This improves auto-generated API documentation.

```diff
-    id: PyObjectId | None = Field(None, alias="_id")
+    id: PyObjectId | None = Field(None, alias="_id", description="MongoDB document ID")
```

todo/models/label.py (2)
6-15: Consider additional field validation for label attributes

The LabelModel is well-structured but lacks validation constraints on fields like `name` and `color`. Consider adding validation to ensure data integrity.

```diff
-    name: str
-    color: str
+    name: str = Field(..., min_length=1, max_length=50)
+    color: str = Field(..., pattern=r"^#[0-9A-Fa-f]{6}$")
```
6-15: Consider using snake_case for field names to align with Python conventions

While not critical, the model uses camelCase field names (isDeleted, createdAt, etc.) rather than Python's conventional snake_case. This creates a stylistic inconsistency with standard Python code.
If this is an intentional design choice to match external API conventions or MongoDB naming, please consider documenting this decision.
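If the camelCase storage keys are kept, one common pattern is snake_case attributes with camelCase aliases. A minimal sketch, assuming Pydantic v2 and using hypothetical field names rather than the PR's actual code:

```python
# Sketch only: snake_case attributes in Python, camelCase keys on input/output.
from pydantic import BaseModel, ConfigDict, Field


class AliasedLabel(BaseModel):
    model_config = ConfigDict(populate_by_name=True)  # accept either naming on input

    is_deleted: bool = Field(False, alias="isDeleted")
    created_by: str | None = Field(None, alias="createdBy")


label = AliasedLabel(**{"isDeleted": True, "createdBy": "user123"})
print(label.is_deleted)                 # True  (Pythonic access)
print(label.model_dump(by_alias=True))  # {'isDeleted': True, 'createdBy': 'user123'}
```

With `by_alias=True` on dump, stored documents keep their camelCase keys while the Python code stays conventional.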
todo/tests/unit/models/test_label.py (2)
20-33: Fix typo in test method name

There's a spelling error in the test method name.

```diff
-    def test_lable_model_throws_error_when_missing_required_fields(self):
+    def test_label_model_throws_error_when_missing_required_fields(self):
```
20-33: Consider adding tests for validation of field formats

While you're testing for required fields, also consider testing validation of field formats (e.g., color format) and boundary conditions for a more comprehensive test suite.
Example test to add:
```python
def test_label_model_validates_color_format(self):
    invalid_data = self.valid_data.copy()
    invalid_data["color"] = "invalid-color"  # Not a valid hex color
    with self.assertRaises(ValidationError) as context:
        LabelModel(**invalid_data)
    error = context.exception.errors()[0]
    self.assertEqual(error.get("type"), "string_pattern_mismatch")
    self.assertEqual(error.get("loc")[0], "color")
```

todo/models/task.py (4)
11-11: Unused database_manager instance

The `database_manager` is initialized but not used within this file. Consider removing it if not needed, or document its purpose if it's intended for future use.

```diff
-database_manager = DatabaseManager()
-
```
36-36: Consider default_factory for DeferredDetailsModel

For complex nested models, it's better to use `default_factory` instead of `None`.

```diff
-    deferredDetails: DeferredDetailsModel | None = None
+    deferredDetails: DeferredDetailsModel = Field(default_factory=DeferredDetailsModel)
```
27-42: Add field validation for critical task properties

Consider adding validation to ensure data integrity for critical fields.

```diff
-    title: str
+    title: str = Field(..., min_length=1, max_length=200)
-    displayId: str
+    displayId: str = Field(..., pattern=r"^TASK-\d+$")
```

This helps prevent invalid data from being stored and provides better error messages.
26-26: Redundant id field definition

The `id` field is already defined in the base Document class, so redefining it here is redundant unless you're changing its configuration.

```diff
-    id: PyObjectId | None = Field(None, alias="_id")
```

todo/tests/unit/models/test_task.py (1)
21-36: Improve test method for missing required fields

The current test method removes all required fields at once and then tests for errors. This approach could mask issues if one field doesn't properly validate or if validation stops after the first error.
Consider testing each required field individually:
```diff
-    def test_task_model_throws_error_when_missing_required_fields(self):
-        incomplete_data = self.valid_task_data.copy()
-        required_fields = ["displayId", "title", "createdAt", "createdBy"]
-        for field_name in required_fields:
-            del incomplete_data[field_name]
-
-        with self.assertRaises(ValidationError) as context:
-            TaskModel(**incomplete_data)
-
-        missing_fields_count = 0
-        for error in context.exception.errors():
-            self.assertEqual(error.get("type"), "missing")
-            self.assertIn(error.get("loc")[0], required_fields)
-            missing_fields_count += 1
-        self.assertEqual(missing_fields_count, len(required_fields))
+    def test_task_model_throws_error_when_missing_required_fields(self):
+        required_fields = ["displayId", "title", "createdAt", "createdBy"]
+        for field_name in required_fields:
+            incomplete_data = self.valid_task_data.copy()
+            del incomplete_data[field_name]
+
+            with self.assertRaises(ValidationError) as context:
+                TaskModel(**incomplete_data)
+
+            self.assertEqual(len(context.exception.errors()), 1)
+            error = context.exception.errors()[0]
+            self.assertEqual(error.get("type"), "missing")
+            self.assertEqual(error.get("loc")[0], field_name)
```

todo/tests/unit/models/common/test_document.py (2)
28-30: Remove debugging print statement

The print statement in the exception handler appears to be debugging code that should be removed.

```diff
-            print(e)
```
23-31: Simplify test for valid collection_name

The try/except block adds unnecessary complexity since you're just testing that no exception is raised.

```diff
-    def test_subclass_with_valid_collection_name(self):
-        try:
-
-            class ValidDocument(Document):
-                collection_name: ClassVar[str] = "valid_collection"
-        except TypeError as e:
-            print(e)
-            self.fail("TypeError raised for valid Document subclass")
+    def test_subclass_with_valid_collection_name(self):
+        # This should not raise an exception
+        class ValidDocument(Document):
+            collection_name: ClassVar[str] = "valid_collection"
+
+        # Verify the class was created successfully
+        self.assertEqual(ValidDocument.collection_name, "valid_collection")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .coveragerc (1 hunks)
- requirements.txt (2 hunks)
- todo/constants/task.py (1 hunks)
- todo/models/common/document.py (1 hunks)
- todo/models/common/pyobjectid.py (1 hunks)
- todo/models/label.py (1 hunks)
- todo/models/task.py (1 hunks)
- todo/tests/fixtures/label.py (1 hunks)
- todo/tests/fixtures/task.py (1 hunks)
- todo/tests/unit/models/__init__.py (1 hunks)
- todo/tests/unit/models/common/__init__.py (1 hunks)
- todo/tests/unit/models/common/test_document.py (1 hunks)
- todo/tests/unit/models/common/test_pyobjectid.py (1 hunks)
- todo/tests/unit/models/test_label.py (1 hunks)
- todo/tests/unit/models/test_task.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
todo/tests/fixtures/label.py
2-2: todo.models.label.LabelModel imported but unused
Remove unused import: todo.models.label.LabelModel
(F401)
todo/tests/fixtures/task.py
2-2: todo.models.task.TaskModel imported but unused
Remove unused import: todo.models.task.TaskModel
(F401)
todo/tests/unit/models/test_task.py
1-1: typing.List imported but unused
Remove unused import: typing.List
(F401)
🪛 GitHub Actions: Tests
todo/tests/fixtures/label.py
[error] 2-2: todo.models.label.LabelModel imported but unused. Remove unused import.
todo/tests/fixtures/task.py
[error] 2-2: todo.models.task.TaskModel imported but unused. Remove unused import.
todo/tests/unit/models/test_task.py
[error] 1-1: typing.List imported but unused. Remove unused import.
🔇 Additional comments (10)
requirements.txt (1)
1-1: Package dependencies have been appropriately added.

The addition of Pydantic and its required dependencies is well aligned with the PR objective of creating database models for tasks and labels. These packages provide robust data validation capabilities that will improve the reliability of the data models.
Also applies to: 13-14, 20-20
todo/tests/unit/models/__init__.py (1)
1-2: Good practice with explicit documentation.

The comment clearly explains the purpose of this empty `__init__.py` file, which is to enable Django's test discovery. This is good documentation practice.

todo/tests/unit/models/common/__init__.py (1)
1-2: Good practice with explicit documentation.

The comment clearly explains the purpose of this empty `__init__.py` file, which is to enable Django's test discovery. This is good documentation practice.

.coveragerc (1)
6-7: Coverage exclusion configuration looks good

Excluding test files from coverage reporting is a best practice, as we want to measure coverage of application code, not test code. This change aligns well with the PR objective of adding database models and their tests.
todo/constants/task.py (1)
1-10: TaskStatus enum implementation is solid

The enumeration provides a clear and type-safe way to represent the different states a task can be in. This is a good foundation for the task model.
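As a quick illustration of how these enums read in practice (this snippet is not part of the PR; the import path is taken from the file list above):

```python
from todo.constants.task import TaskPriority, TaskStatus

# Statuses carry their string value; priorities carry a sort-friendly integer.
print(TaskStatus.IN_PROGRESS.value)  # "IN_PROGRESS"
print(TaskPriority.HIGH.value)       # 1

# Lower numbers mean higher priority, so sorting by value ranks HIGH first.
ordered = sorted(TaskPriority, key=lambda p: p.value)
print([p.name for p in ordered])     # ['HIGH', 'MEDIUM', 'LOW']
```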
todo/tests/fixtures/label.py (1)
5-20: Label fixture data looks appropriate

The test data contains all the necessary fields for labels with realistic values. Good use of `ObjectId` for MongoDB compatibility.
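A hypothetical entry mirroring that shape (the values here are invented, and the real fixture carries more fields; only the `_id`, `name`, and `color` keys are inferred from the review comments above):

```python
from bson import ObjectId

# Hypothetical label fixture entry; field names follow the review's description.
labels_db_data = [
    {
        "_id": ObjectId("60f7b1e9e1d3c2a5b8f0a123"),
        "name": "Feature",
        "color": "#00FF00",
    },
]
```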
todo/models/common/document.py (2)

9-15: Well-structured base Document class with good enforcement of collection_name

The Document class provides a solid foundation for MongoDB models with proper ID field aliasing and subclass validation. The `__init_subclass__` method effectively enforces that all subclasses must define a collection_name.

17-20: Good Pydantic configuration for MongoDB compatibility

The Config class properly sets up JSON encoding for ObjectId and enables populate_by_name, which is important for alias handling.
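To make the enforcement concrete, here is a minimal, self-contained sketch of the `__init_subclass__` pattern described above, reconstructed from the review comments rather than copied from the PR (the real Document also inherits from Pydantic's BaseModel and carries the aliased `id` field):

```python
from typing import ClassVar


class Document:
    collection_name: ClassVar[str]

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Reject subclasses that fail to declare a usable collection name.
        name = getattr(cls, "collection_name", None)
        if not isinstance(name, str) or not name:
            raise TypeError(f"{cls.__name__} must define a non-empty string 'collection_name'")


class TaskDocument(Document):  # hypothetical subclass for illustration
    collection_name: ClassVar[str] = "tasks"


try:
    class BadDocument(Document):  # no collection_name: rejected at class-creation time
        pass
except TypeError as exc:
    print(exc)
```

The check runs at class-creation time, so a missing `collection_name` fails as soon as the module defining the subclass is imported, not when the first instance is built.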
todo/tests/unit/models/test_label.py (1)
14-19: Good test coverage for model instantiation with valid data

The test properly verifies default values and successful instantiation, which is important for ensuring the model behaves as expected.
todo/tests/unit/models/common/test_pyobjectid.py (1)
1-40: Well-structured and comprehensive test suite for PyObjectId!

Your test coverage for the PyObjectId class is thorough and well-implemented. You've covered validation of valid ObjectIds, handling of invalid ObjectIds, edge cases like None values, and proper integration with Pydantic models.
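For readers without the file open, tests in that spirit might look like the sketch below; the method names are invented here, and only the behavior of `PyObjectId.validate` shown in the earlier diff is assumed:

```python
import unittest

from bson import ObjectId

from todo.models.common.pyobjectid import PyObjectId


class PyObjectIdValidateTests(unittest.TestCase):
    def test_valid_object_id_string_is_converted(self):
        raw = "672f7c5b775ee9f4471ff1dd"
        self.assertEqual(PyObjectId.validate(raw), ObjectId(raw))

    def test_none_passes_through_unchanged(self):
        self.assertIsNone(PyObjectId.validate(None))

    def test_invalid_value_raises_value_error(self):
        with self.assertRaises(ValueError):
            PyObjectId.validate("not-an-object-id")
```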
Date: 20 Dec 2024
Developer Name: @Achintya-Chatterjee
Issue Ticket Number
Description
Documentation Updated?
Under Feature Flag
Database Changes
Breaking Changes
Development Tested?
Screenshots
Screenshot 1
Test Coverage
Coverage report
Additional Notes
The CI is failing right now because there is an issue in the workflow. I have raised a PR to fix this. Once it's merged, the CI should pass.
Summary by CodeRabbit
New Features
Chores