Rating Methodology
Every course we review is scored against the same five criteria. Rankings are based on the weighted total, not on affiliate payouts or instructor popularity.
The five criteria
1. Teaching quality (25%). Clarity of explanation, quality of production, instructor's command of the subject, and whether the course actually teaches the material rather than just presenting it.
2. Hands-on depth (25%). How much real practice you get — assignments, projects, labs, code you write yourself — versus passive video watching.
3. Community and support (15%). Quality of forums, availability of TAs or instructor Q&A, and whether stuck learners actually get unstuck.
4. Content freshness (20%). How recently the course was updated, whether it reflects current tools and techniques, and how quickly the course team responds to changes in the field.
5. Value for money (15%). Price relative to what you learn, with free and low-cost options scored on their own merits rather than penalized for being cheap.
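
To make the weighting concrete, here is a minimal sketch of how the weighted total combines the five criteria, assuming each criterion is scored on a 0-10 scale. Only the weights come from the list above; the scale and the example scores are illustrative assumptions, not real review data.

```python
# Minimal sketch of the weighted total described above.
# The weights are the published ones; the 0-10 scale and the example
# scores below are illustrative assumptions, not actual review data.

WEIGHTS = {
    "teaching_quality": 0.25,
    "hands_on_depth": 0.25,
    "community_support": 0.15,
    "content_freshness": 0.20,
    "value_for_money": 0.15,
}

def weighted_total(scores):
    """Combine per-criterion scores (0-10) into a single weighted total."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Example: a strong, hands-on course whose content is starting to age.
example = {
    "teaching_quality": 9.0,
    "hands_on_depth": 8.0,
    "community_support": 7.0,
    "content_freshness": 6.0,
    "value_for_money": 8.0,
}
print(f"{weighted_total(example):.2f}")  # 7.70 on the same 0-10 scale
```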
Badges
Our badges — Top Pick, Editor's Pick, Best Value, Best Free, Solid Choice — are applied at the article level based on which courses stand out on specific dimensions, not on aggregate score alone. A course can win Best Value without being the highest-scored course in the article.
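
For illustration only, a rough sketch of the idea behind a dimension-based badge like Best Value: pick the course that stands out on value for money among courses that clear a quality bar, even if it is not the top overall scorer. Badge calls are editorial, not formula output; the 7.0 bar and the field names below are assumptions.

```python
# Hypothetical sketch of a dimension-based badge such as Best Value.
# Badges are editorial calls made per article; the quality bar (7.0) and
# the field names here are illustrative assumptions only.

def best_value(courses):
    """Return the course that stands out on value_for_money among courses
    clearing a minimum weighted total, even if it is not the top scorer."""
    eligible = [c for c in courses if c["weighted_total"] >= 7.0]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["value_for_money"])
```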
What we won't do
- We won't include a course in a ranked list just because it pays well.
- We won't remove a negative review on request from an instructor or platform.
- We won't use AI-generated reviews without human verification against our criteria.
Re-evaluation cadence
We re-score every ranked course at least every six months. Faster-moving topics (LLMs, agents, prompt engineering) are re-checked quarterly or when a major model release changes the field.