GitHub Trending Projects Dataset - Known Issues & Limitations
Dataset Overview
- Total Projects: 423,098
- Date Range: 2013-08-21 to 2025-11-30
- Unique Repositories: 14,500
- Success Rate: 89.8% (17,127/19,064 URLs)
Major Issues
1. Missing Star/Fork Count Data (2013-2019)
Severity: High | Affected: 25,149 entries (5.9%)
Problem:
- 100% of 2013-2019 data lacks star/fork counts
- Only data from 2020+ has star/fork information
- This is due to HTML structure differences in older Wayback Machine snapshots
Impact:
- Cannot compare popularity metrics for pre-2020 projects
- Monthly rankings rely solely on trending score for 2013-2019
- Incomplete analysis for historical trends
Affected Years:
2013: 100% missing (150 entries)
2014: 100% missing (125 entries)
2015: 100% missing (325 entries)
2016: 100% missing (1,200 entries)
2017: 100% missing (1,550 entries)
2018: 100% missing (4,324 entries)
2019: 100% missing (17,475 entries)
2020+: 0% missing (397,949 entries)
Recommendation:
- Use weighted trending score only for historical analysis
- Clearly document this limitation when presenting data
- Consider scraping current star counts from the GitHub API for historical projects (see the sketch below)
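The last recommendation can be prototyped against the GitHub REST API's `/repos/{owner}/{repo}` endpoint, which returns `stargazers_count` and `forks_count`. A minimal sketch follows, assuming a local `trending_projects.csv` with `repo` ("owner/name") and `year` columns; those names are not part of the published schema, and the values retrieved are today's counts, not counts at trending time (see issue 3).

```python
import os
import time

import pandas as pd
import requests

API = "https://api.github.com/repos/{}"
HEADERS = {"Accept": "application/vnd.github+json"}
if os.environ.get("GITHUB_TOKEN"):  # optional token raises the rate limit
    HEADERS["Authorization"] = f"Bearer {os.environ['GITHUB_TOKEN']}"

def current_counts(slug):
    """Return (stars, forks) for an "owner/name" slug, or (None, None) on failure."""
    resp = requests.get(API.format(slug), headers=HEADERS, timeout=30)
    if resp.status_code != 200:  # deleted, renamed, blocked, or rate limited
        return None, None
    data = resp.json()
    return data.get("stargazers_count"), data.get("forks_count")

df = pd.read_csv("trending_projects.csv")              # hypothetical file name
pre_2020 = df.loc[df["year"] < 2020, "repo"].unique()  # assumed column names
records = []
for slug in pre_2020:
    stars, forks = current_counts(slug)
    records.append({"repo": slug, "current_stars": stars, "current_forks": forks})
    time.sleep(1)  # stay comfortably under the API rate limit

df = df.merge(pd.DataFrame(records), on="repo", how="left")
```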
2. Uneven Temporal Distribution
Severity: High | Affected: All data
Problem:
- Snapshot frequency varies dramatically: 1 to 31 snapshots per month
- Some months have 1 snapshot (25 projects), others have 31 (15,763 projects)
- A 31x spread in data density across time periods
Examples:
Sparse months (1 snapshot):
- 2015-04: 25 projects
- 2015-06: 25 projects
- 2016-11: 25 projects
Dense months (31 snapshots):
- 2019-05: 4,650 projects
- 2020-01: 17,446 projects
- 2020-05: 15,763 projects
Impact:
- Over-representation of 2019-2020 period
- Monthly scores favor periods with more snapshots
- Difficult to compare across time periods fairly
- Projects appearing in dense months get inflated scores
Recommendation:
- Normalize scores by dividing by the number of snapshots per month (see the sketch below)
- Weight monthly rankings by data density
- Consider resampling to create uniform temporal distribution
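A minimal sketch of the first recommendation: divide each month's accumulated scores by the number of snapshots captured that month. The file name and the `snapshot_date`, `repo`, and `score` column names are assumptions about a local export, not the published schema.

```python
import pandas as pd

df = pd.read_csv("trending_projects.csv", parse_dates=["snapshot_date"])
df["month"] = df["snapshot_date"].dt.to_period("M")

# Distinct capture dates per month: 1 for sparse months, up to 31 for dense ones.
snapshots_per_month = df.groupby("month")["snapshot_date"].nunique()

# Sum the per-appearance score per repo per month, then divide by snapshot count
# so a month with 31 snapshots cannot out-score a month with 1 by volume alone.
monthly = (
    df.groupby(["month", "repo"])["score"]
    .sum()
    .rename("raw_score")
    .reset_index()
)
monthly["normalized_score"] = monthly["raw_score"] / monthly["month"].map(snapshots_per_month)
```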
3. Inconsistent Star/Fork Count Timing
Severity: Medium | Affected: All entries with star counts (67.8%)
Problem:
- Star/fork counts are "maximum ever recorded" across all snapshots
- A 2015 project's star count might be from 2025
- A 2025 project's star count is from 2025
- Not temporally consistent or comparable
Example Issues:
Project A (trending 2015):
- Trending date: 2015-03-15
- Star count: 100,000 (scraped 2025)
- Had 10 years to accumulate stars
Project B (trending 2025):
- Trending date: 2025-03-15
- Star count: 20,000 (scraped 2025)
- Had 0 years to accumulate stars
Issue: Can't fairly compare popularity
Impact:
- Older projects appear more popular (survival bias)
- Can't analyze "stars at time of trending"
- Misleading for popularity comparisons across eras
Recommendation:
- Document this clearly: "Stars represent current popularity, not popularity when trending" (see the sketch below)
- Consider using trending score only for cross-era comparisons
- For accurate historical analysis, stars would need to be scraped from the archived snapshots themselves
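One lightweight way to keep the caveat visible in downstream analysis is to record how long each entry had to accumulate its recorded stars. A sketch, assuming a `trending_date` column and a scrape date equal to the last snapshot:

```python
import pandas as pd

# Stars/forks were scraped no later than the final snapshot in the dataset.
SCRAPE_DATE = pd.Timestamp("2025-11-30")

df = pd.read_csv("trending_projects.csv", parse_dates=["trending_date"])
df["years_to_accumulate"] = (SCRAPE_DATE - df["trending_date"]).dt.days / 365.25
# A 2015 entry with 100,000 stars had ~10 years to accumulate them; a 2025 entry
# with 20,000 stars had ~0 years, so the raw counts are not comparable.
```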
4. Multiple Appearances Bias
Severity: Medium | Affected: Scoring methodology
Problem:
- Some projects appear 1,900+ times, others appear once
- Scoring favors projects that "stick around" on trending
- Brief but intense viral projects get undervalued
Distribution:
1 appearance: 1,129 projects (7.8%)
2-5 appearances: 1,852 projects (12.8%)
6-10 appearances: 3,732 projects (25.7%)
11-50 appearances: 6,005 projects (41.4%)
50+ appearances: 1,782 projects (12.3%)
Most Over-Represented:
1. jwasham/coding-interview-university: 1,948 appearances
2. TheAlgorithms/Python: 1,891 appearances
3. donnemartin/system-design-primer: 1,865 appearances
4. public-apis/public-apis: 1,830 appearances
5. EbookFoundation/free-programming-books: 1,737 appearances
Impact:
- "Evergreen" educational repos dominate rankings
- Viral new projects undervalued if they trend briefly
- Doesn't distinguish between sustained vs. brief trending
Recommendation:
- Create separate rankings: "Most Consistent" vs "Peak Trending"
- Add a "peak rank achieved" metric (see the sketch below)
- Consider decay function for repeated appearances
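A sketch of the first two recommendations: per-repository appearance counts plus a "peak rank achieved" metric, split into "Most Consistent" and "Peak Trending" views. The `repo`/`rank` column names and the 10-appearance cutoff for "briefly viral" are assumptions, not part of the dataset's methodology.

```python
import pandas as pd

df = pd.read_csv("trending_projects.csv")

summary = df.groupby("repo").agg(
    appearances=("rank", "size"),
    peak_rank=("rank", "min"),      # 1 is the best possible rank
    median_rank=("rank", "median"),
)

# "Most Consistent": evergreen repos that stay on trending the longest.
most_consistent = summary.sort_values("appearances", ascending=False).head(25)

# "Peak Trending": repos that hit a top rank despite few appearances.
peak_trending = (
    summary[summary["appearances"] <= 10]
    .sort_values(["peak_rank", "appearances"])
    .head(25)
)
```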
5. Linear Scoring Assumption
Severity: Low-Medium | Affected: Monthly rankings
Problem:
- Current scoring: Rank 1 = 25 pts, Rank 2 = 24 pts (linear)
- Assumes a move from rank 1 to 2 has the same value as a move from rank 24 to 25
- In reality, top positions have exponentially more visibility
Distribution:
Rank 1-5: 90,280 entries (21.3%)
Rank 6-10: 90,178 entries (21.3%)
Rank 11-15: 87,522 entries (20.7%)
Rank 16-20: 79,516 entries (18.8%)
Rank 21-25: 75,602 entries (17.9%)
Impact:
- Undervalues #1 position
- May not reflect actual visibility/impact differences
- Alternative exponential scoring might be more accurate
Recommendation:
- Consider exponential scoring: 2^(25-rank)
- Or logarithmic: log(26-rank)
- A/B test different scoring functions against actual star growth (the three candidates are sketched below)
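The three candidate scoring functions written out side by side. This is a sketch for comparison only; which one best tracks real visibility would still need to be validated against observed star growth.

```python
import math

def linear_score(rank):
    """Current scheme: rank 1 -> 25 pts, rank 25 -> 1 pt."""
    return 26 - rank

def exponential_score(rank):
    """2^(25-rank): rank 1 -> 16,777,216, rank 25 -> 1; heavily rewards top slots."""
    return 2 ** (25 - rank)

def log_score(rank):
    """log(26-rank): rank 1 -> ln(25) ~= 3.22, rank 25 -> 0; compresses differences."""
    return math.log(26 - rank)

for rank in (1, 2, 5, 10, 25):
    print(rank, linear_score(rank), exponential_score(rank), log_score(rank))
```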
6. Failed Scrapes & Missing Data
Severity: Medium | Affected: 1,937 URLs (10.2%)
Problem:
- SSL/TLS incompatibility with 2014-2019 Wayback snapshots
- Incomplete Wayback Machine captures
- Connection timeouts and 503 errors
Impact:
- Gaps in temporal coverage
- Some dates completely missing
- Potential systematic bias if certain types of snapshots fail more
Affected Periods:
2014-10-01 to 2014-12-21: Many failures
2016-02-24 to 2016-03-11: Several failures
2019-06-12 to 2019-12-31: Heavy failures (mid-2019 SSL issues)
2024-10-28: 3 failures (503 errors)
Recommendation:
- Retry failed URLs periodically, since Wayback Machine availability changes over time (see the retry sketch below)
- Use GitHub API to fill gaps where possible
- Document missing date ranges in analysis
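A minimal retry sketch for the first recommendation, with exponential backoff to ride out 503s and transient SSL failures. The `failed_urls.txt` input (one Wayback URL per line) is a hypothetical log of the 1,937 failures; it is not shipped with the dataset.

```python
import time

import requests

def fetch_with_retry(url, attempts=4):
    """Return the page HTML, or None if every attempt fails."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=60)
            if resp.status_code == 200:
                return resp.text
        except requests.RequestException:
            pass  # SSL handshake failures, timeouts, connection resets
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s
    return None

with open("failed_urls.txt") as fh:
    failed = [line.strip() for line in fh if line.strip()]

recovered = {url: html for url in failed if (html := fetch_with_retry(url)) is not None}
```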
7. Rank Distribution Skew
Severity: Low | Affected: Lower-ranked entries
Problem:
- Fewer entries at ranks 21-25 (75,602) vs ranks 1-5 (90,280)
- Suggests some snapshots had <25 projects
- Or extraction issues with lower-ranked items
Impact:
- Scoring may overvalue top ranks due to sample size
- Statistical significance varies by rank position
Recommendation:
- Filter analysis to the top 20 ranks for consistency (see the sketch below)
- Or normalize scores by rank availability
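Both options in one short sketch; the `rank` column name is an assumption about the local export.

```python
import pandas as pd

df = pd.read_csv("trending_projects.csv")

# Option 1: keep only ranks 1-20, where capture counts are comparable.
top20 = df[df["rank"] <= 20]

# Option 2: weight each entry by how rarely its rank was captured, so the
# sparsely captured ranks 21-25 are not under-counted in aggregate scores.
rank_counts = df["rank"].value_counts()
df["rank_weight"] = df["rank"].map(rank_counts.max() / rank_counts)
```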
Dataset Quality Metrics
Completeness
✅ Temporal Coverage: 89.8% (128/142 months have data)
❌ Star/Fork Data: 67.8% complete (missing all pre-2020)
✅ Rank Data: 100% complete
✅ Repository Names: 100% complete
Consistency
❌ Snapshot Frequency: Highly inconsistent (1-31 per month)
❌ Star Count Timing: Not temporally aligned
⚠️ Scoring Methodology: Linear assumption (debatable)
Reliability
✅ Scraping Success: 89.8%
❌ Failed URLs: 10.2% (recoverable with retry)
✅ Data Validation: No duplicate entries detected
Recommended Fixes
High Priority
- Add normalized scores that account for snapshot frequency
- Document star count timing issue prominently in analysis
- Create separate pre-2020 and post-2020 analyses due to missing data (see the sketch below)
- Retry failed URLs to improve coverage
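A sketch of the pre-/post-2020 split, assuming a `snapshot_date` column holds the capture date:

```python
import pandas as pd

df = pd.read_csv("trending_projects.csv", parse_dates=["snapshot_date"])

historical = df[df["snapshot_date"].dt.year < 2020]   # rank data only, no stars/forks
modern = df[df["snapshot_date"].dt.year >= 2020]      # full rank + star/fork metrics
```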
Medium Priority
- Test exponential scoring vs linear for better accuracy
- Add "peak rank" metric to identify viral projects
- Separate "evergreen" vs "viral" rankings
- Scrape current GitHub API data to fill historical gaps
Low Priority
- Create confidence intervals for sparse months
- Add data quality flags per entry (see the sketch below)
- Document GitHub trending algorithm changes over time
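A sketch of per-entry quality flags; the column names and the 5-snapshot sparseness threshold are assumptions, shown only to illustrate the idea.

```python
import pandas as pd

df = pd.read_csv("trending_projects.csv", parse_dates=["snapshot_date"])
df["month"] = df["snapshot_date"].dt.to_period("M")

# How many distinct capture dates the entry's month has.
snapshots_in_month = df.groupby("month")["snapshot_date"].transform("nunique")

df["has_star_data"] = df["stars"].notna()        # False for every pre-2020 row
df["sparse_month"] = snapshots_in_month < 5      # hypothetical threshold
df["high_quality"] = df["has_star_data"] & ~df["sparse_month"]
```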
Usage Guidelines
✅ Good Uses
- Identifying trending patterns in 2020-2025 (complete data)
- Analyzing trending frequency/consistency
- Discovering historically significant projects
- Comparative analysis within same time period
⚠️ Use With Caution
- Cross-era popularity comparisons (star count issues)
- Monthly comparisons with very different snapshot counts
- Absolute popularity rankings (use GitHub API instead)
- Historical analysis pre-2020 (missing star/fork data)
❌ Not Recommended
- Claiming "most popular project ever" (timing issues)
- Direct star count comparisons across decades
- Precise month-to-month trending velocity analysis (uneven sampling)
- Analysis of projects that trended <5 times (insufficient data)
Data Quality by Year
| Year | Projects | Star Data | Snapshots | Quality Grade |
|---|---|---|---|---|
| 2013 | 150 | 0% | Low | D (Minimal) |
| 2014 | 125 | 0% | Low | D (Minimal) |
| 2015 | 325 | 0% | Low | D (Minimal) |
| 2016 | 1,200 | 0% | Low | D (Minimal) |
| 2017 | 1,550 | 0% | Low | D (Minimal) |
| 2018 | 4,324 | 0% | Medium | C- (Limited) |
| 2019 | 17,475 | 0% | High | C+ (Incomplete) |
| 2020 | 108,672 | 100% | High | A- (Excellent) |
| 2021 | 70,006 | 100% | High | A- (Excellent) |
| 2022 | 74,915 | 100% | High | A- (Excellent) |
| 2023 | 73,674 | 100% | High | A- (Excellent) |
| 2024 | 46,538 | 100% | High | A- (Excellent) |
| 2025 | 24,144 | 100% | Medium | A- (Excellent) |
Conclusion
This dataset is excellent for 2020-2025 analysis but has significant limitations for historical (2013-2019) analysis. The primary issues are:
- Missing star/fork data pre-2020 (structural limitation)
- Uneven temporal distribution (Wayback Machine artifact)
- Star count timing inconsistency (methodology issue)
These issues are documentable and manageable but should be clearly communicated in any analysis or visualization using this data.
Overall Grade: B+
- A- for recent data (2020-2025)
- C+ for historical data (2013-2019)
- Excellent for trending patterns, limited for absolute popularity metrics