From facial recognition systems to credit scoring, cancer diagnosis, and crime prediction, racial bias is pervasive across AI/ML training pipelines, an unfortunate but direct reflection of a society marked by racial inequality. This disparity is further nurtured by the lack of cognitive diversity in the technology sector, and solving one problem requires solving the other as well. The prevailing assumption among most ML experts is that bias enters the ML pipeline at the data collection stage. Google's ML Fairness Indicators toolkit is built on this simplistic but untrue assumption.
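To make concrete what a tool like Fairness Indicators measures: it reports standard evaluation metrics (for example, false positive rate) sliced by a sensitive attribute, surfacing disparities in a model's outputs after training. The sketch below illustrates that kind of group-sliced audit in plain NumPy; it is not the toolkit's actual API, and the data, labels, and group names are invented purely for illustration.

```python
import numpy as np

# Hypothetical evaluation data: true labels, model predictions, and a
# sensitive attribute for each example (all values are placeholders).
y_true = np.array([0, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 1])
race   = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / all actual negatives."""
    negatives = y_true == 0
    return np.mean(y_pred[negatives] == 1) if negatives.any() else float("nan")

# Slice the metric by group, as a post-training fairness audit would.
for group in np.unique(race):
    mask = race == group
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {group}: FPR = {fpr:.2f}")
```

Note what this kind of audit can and cannot do: it reveals disparate outcomes at evaluation time, but it says nothing about where in the pipeline the disparity was introduced, which is precisely why locating bias solely at the data collection stage is too narrow a frame.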