Four U.S. government agencies have issued a joint advisory on bias in the use of artificial intelligence (AI) and other automated systems, a scope that includes software products that “make decisions.” The agencies have pledged to use their enforcement powers to “protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.” For developers of medical software, the advisory signals that the FDA is not the only federal agency that will be looking over their shoulders to evaluate the risk of bias in their algorithms.