Description
I've recently added type annotations to a large library, and have been checking my code using both mypy and pyright. While doing so, I noticed many differences between mypy and pyright in the types they choose to infer. Each type checker has a justification for its choices, but as a user this situation is frustrating, because I rely a lot on inference. It meant I frequently had to think about "what pyright would do" and "what mypy would do", and to scour their issue trackers to understand what's going on - what's a bug and what's a "feature".
I totally understand that each type checker has been developed independently and influenced by different needs and design choices. I also understand that the ecosystem is very much in flux. I have seen authors of type checkers justify their choices - and rightfully so. Nonetheless I think it would be a big benefit to the community to specify type inference rules more fully (PEP?). If typing is seen as part of the Python language (in various PEPs), and type inference is seen as a feature of typing, then that feature should behave consistently.
I assume it would take a lot of work to define inference rules and reach an agreement that works for all type checkers, and probably considerable work to implement the necessary changes as well. However, I believe it would be beneficial in the long run, especially since, as time goes by, more backwards-compatibility concerns will just pile up.
Areas where I have noticed considerable differences include redefinitions (mypy's allow_redefinition is only partially consistent with pyright's default behavior), Literals, union vs. join, and overloads. There are probably others I can't recall. A couple of minimal examples are sketched below.
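To make this concrete, here is a rough sketch of two of the divergences I ran into. The exact behavior varies by checker version and configuration, so treat the revealed types in the comments as what I observed rather than as a specification.

```python
from typing import reveal_type  # Python 3.11+; both checkers also recognize the bare name


def pick(cond: bool) -> None:
    # Union vs. join: for a conditional expression, pyright infers the
    # union of the two branch types, while mypy has historically inferred
    # their join (the nearest common base class).
    x = 1 if cond else "a"
    reveal_type(x)  # pyright: int | str    mypy (historically): object


def redefine() -> None:
    # Redefinitions: pyright allows an unannotated variable to be re-bound
    # to a different type by default; mypy reports an incompatible
    # assignment unless allow_redefinition applies (and even then only in
    # limited cases).
    y = 1
    y = "a"
    reveal_type(y)  # pyright: str    mypy: error on the assignment above
```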
I'd like to hear what others think about this topic.