1. The system can be gamed. When the impact factor was first developed, H. J. M. Hanley at the National Bureau of Standards somewhat jokingly suggested that future scientists should “cite yourself as often as possible; insist that your work be cited in all articles that you review; and automatically pass articles that already contain a sufficient number of citations to you.”
2. Citations appear for many reasons. A citation may appear because an article is being criticized rather than endorsed. Citations also appear in journal editorials, a factor known, for example, to have inflated the impact factors of trauma and orthopedic journals.
3. Does not reflect all academic output. Datasets, models, study methodologies, best practices, pedagogy, and code are all valid outputs of academic work, but they are not counted, or are not necessarily weighed in the same fashion as articles.
4. Interdisciplinary work is harder to measure. As authors work across disciplines, metrics such as the impact factor become less useful, since they are usually designed for comparisons within a single discipline.
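The gaming concern in point 1 is easy to see in the arithmetic. The two-year impact factor divides the citations a journal receives in a given year (to items from the previous two years) by the number of citable items it published in those years; because self-citations count toward the numerator, they inflate the result directly. The figures below are hypothetical, chosen only to illustrate the mechanism:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by citable items in those years."""
    return citations / citable_items

# Hypothetical journal: 200 citable items over the two-year window.
citable_items = 200
external_citations = 300   # citations from other journals
self_citations = 100       # journal/author self-citations (the "gamed" part)

honest = impact_factor(external_citations, citable_items)
gamed = impact_factor(external_citations + self_citations, citable_items)

print(f"without self-citations: {honest:.2f}")  # 1.50
print(f"with self-citations:    {gamed:.2f}")   # 2.00
```

In this toy case, self-citations raise the metric by a third without any change in external recognition of the work.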
The “tyranny of metrics” has driven the rise of “predatory” publishing.
Unfortunately, there is no consensus among scholars on how to identify and classify journals as predatory. Past efforts to create blacklists have run into frequent questions about their criteria, as have efforts to create whitelists.