I inspire teams to deliver exceptional business value through engineering best practices and seamless collaboration. With extensive experience leading and structuring high-performance teams, I cultivate strategic partnerships and oversee the full product lifecycle to ensure innovation and efficiency. My expertise lies in transforming teams into quality powerhouses by implementing and evangelizing test automation across all layers of the application architecture (Unit, Integration, API, and UI testing), maximizing reliability and scalability.
You can also find me on LinkedIn.
I am creating examples of Agentic AI Patterns presented by Mark Kashef in his YouTube video "Master ALL 20 Agentic AI Design Patterns [Complete Course]": https://github.com/JimHinson/agentic-ai-patterns
- Developed the SQA roadmap (objectives, scope, priorities, standards, and policies) and established QA KPIs that improved project alignment with cross-functional teams.
- Achieved a 75% reduction in hot patch delivery time by implementing sanity test automation.
- Cut legacy test automation failures by 50%, reducing sprint hardening time by 20%.
- Led a team of 20 domestic and international engineers to improve quality through smoke test automation, introducing team ownership of testing, peer test reviews, and test automation code reviews.
- Idempotent: The test environment should be unchanged once the test completes.
- Independent: The test should run on its own, without relying on any other test.
- Autonomous: The test should not affect, or be affected by, any other test (the first sketch after this list illustrates these three principles).
- Use control IDs or other unique locators for UI tests.
- Work closely with development to keep tests in sync with the code.
- Keep tests in the same repository as the application code when possible, so they stay in sync.
- Push tests down the triangle when possible (see the second sketch after this list). Generally speaking, this list is in order from least to most reliable:
   a. UI/E2E tests
   b. API tests
   c. Contract tests
   d. Unit tests
- Maintain clear logging and reporting for quick failure analysis.
- Never ship code with broken tests.
- Use data-driven tests to cover a broader set of scenarios with the same test code (see the third sketch after this list).
- Use a modular design that supports easy reuse of code.
- Keep your test code separate from action code, and separate from app interaction code (UI code). One nice pattern: Page Objects, Action Objects, Test Objects (sketched in the last example after this list).
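
The principles above are easiest to see in code. Below is a minimal pytest sketch, with a hypothetical `user_store` fixture and `create_user` helper (neither comes from a real project): each test builds and tears down its own data, so the environment is unchanged afterward and no test depends on, or interferes with, another.

```python
import pytest


@pytest.fixture
def user_store():
    """Hypothetical in-memory store standing in for a real test environment."""
    store = {}        # fresh state for every test: independent and autonomous
    yield store
    store.clear()     # teardown restores the environment: idempotent


def create_user(store, name):
    """Hypothetical helper that adds a user and returns its id."""
    user_id = len(store) + 1
    store[user_id] = {"name": name}
    return user_id


def test_create_user(user_store):
    user_id = create_user(user_store, "alice")
    assert user_store[user_id]["name"] == "alice"


def test_store_starts_empty(user_store):
    # Passes regardless of test order because no state leaks between tests.
    assert user_store == {}
```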
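
As a rough illustration of pushing tests down the triangle, the sketch below uses FastAPI purely as an example framework; the `apply_discount` rule and `/price` endpoint are invented for illustration. The business rule is covered exhaustively with cheap unit tests, leaving a single thin API-level test to confirm the wiring.

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


def apply_discount(total: float, percent: float) -> float:
    """Hypothetical business rule: discounts are capped at 50%."""
    return round(total * (1 - min(percent, 50.0) / 100), 2)


@app.get("/price")
def price(total: float, percent: float):
    return {"price": apply_discount(total, percent)}


# Unit level: check the rule where it is cheapest and most reliable.
def test_discount_is_capped_at_50_percent():
    assert apply_discount(100.0, 80.0) == 50.0


def test_regular_discount():
    assert apply_discount(100.0, 10.0) == 90.0


# API level: one thin test to confirm the endpoint wires the rule correctly.
def test_price_endpoint_applies_discount():
    client = TestClient(app)
    response = client.get("/price", params={"total": 100.0, "percent": 10.0})
    assert response.status_code == 200
    assert response.json() == {"price": 90.0}
```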
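
For data-driven testing, pytest's `parametrize` marker is one common approach: a single test body driven by a table of cases. The `normalize_email` function here is a hypothetical target, not code from any of my projects.

```python
import pytest


def normalize_email(raw: str) -> str:
    """Hypothetical function under test: trims whitespace and lowercases."""
    return raw.strip().lower()


@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Alice@Example.com", "alice@example.com"),
        ("  bob@example.com ", "bob@example.com"),
        ("CAROL@EXAMPLE.COM", "carol@example.com"),
    ],
)
def test_normalize_email(raw, expected):
    # One test body, many scenarios: add rows instead of writing new tests.
    assert normalize_email(raw) == expected
```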
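
To make the Page Object / Action Object / Test Object split concrete, here is a sketch using Playwright's Python API for the UI layer and the `page` fixture from the pytest-playwright plugin; the URL, element IDs, and class names are all made up for illustration. The page object owns locators (unique IDs) and raw interactions, the action object composes them into business-level steps, and the test states intent and assertions only.

```python
from playwright.sync_api import Page


class LoginPage:
    """Page Object: owns locators (unique control IDs) and raw UI interaction."""

    URL = "https://example.test/login"          # hypothetical URL

    def __init__(self, page: Page):
        self.page = page

    def open(self):
        self.page.goto(self.URL)

    def enter_username(self, username: str):
        self.page.fill("#username", username)   # unique control ID

    def enter_password(self, password: str):
        self.page.fill("#password", password)

    def submit(self):
        self.page.click("#login-button")

    def banner_text(self) -> str:
        return self.page.inner_text("#welcome-banner")


class LoginActions:
    """Action Object: business-level steps composed from page objects."""

    def __init__(self, page: Page):
        self.login_page = LoginPage(page)

    def log_in_as(self, username: str, password: str):
        self.login_page.open()
        self.login_page.enter_username(username)
        self.login_page.enter_password(password)
        self.login_page.submit()

    def welcome_message(self) -> str:
        return self.login_page.banner_text()


def test_valid_login(page: Page):
    """Test Object: intent and assertions only; no locators or navigation details."""
    actions = LoginActions(page)
    actions.log_in_as("qa_user", "correct-horse-battery-staple")
    assert "Welcome" in actions.welcome_message()
```

Because locators live in exactly one place, a UI change touches only the page object, not the tests.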
Suggested priority order:
- Critical functionality.
- Commonly used functionality.
- High-risk functionality.
- High-visibility functionality.
- Bug concentrations.
- Time-consuming tests.
- Data-driven tests.
- My current projects include:
-- Quality Leadership: Leading my organization to deliver business value faster through improved quality.
-- Shared testing delivers higher quality earlier in the SDLC. This works best when Quality Engineering maintains ownership of quality, sharing ownership of testing across the team.
-- Building quality from the outset eliminates context switching and rework.
- What does this mean for you?
-- Higher-performing teams who find greater fulfillment in their work.
- How do I do this?
-- Designing a Definition of Done with the team.
--- When starting a story, we use a Three Amigos meeting, involving Coder, Tester and Product.
--- We outline our story implementation.
--- We decide how we will test each feature.
--- We walk away with an outline of what the code will look like, what the test plan will look like, and confidence that we're on target to satisfy the needs of the business.
-- During the coding phase, we outline as much, or as little, of a test plan as we need.


