AI model marketplaces come in various shapes and sizes. Outside of the conventional marketplace approach, large Internet companies have made open-source contributions that include not just libraries but also models. BERT is an example of a model distributed online by Google that has gained popularity in recent times amongst industry practitioners and researchers alike.
The distribution of pre-trained NLP models by Hugging Face, along with efforts like the Model Zoo and the Open Neural Network Exchange (ONNX), makes pre-trained models available and easily accessible to a wide audience.
Hugging Face in particular has developed libraries that have gained significant traction in the NLP community for their ease of use and for the wide variety of models they make available, often accessible with just a few lines of code.
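As a concrete illustration, loading a distributed BERT checkpoint really does take only a few lines with the transformers library. This is a minimal sketch: `bert-base-uncased` is one of the publicly distributed BERT variants, and the calls download weights over the network on first use.

```python
# Minimal sketch: load a pre-trained BERT checkpoint with Hugging Face
# transformers and run one sentence through it. Requires the transformers
# and torch packages, plus network access to fetch the checkpoint.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Pre-trained models are easy to use.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # one hidden vector per token
```

The ease of this workflow is exactly why practitioners should pause before plugging such a model into production: nothing in these few lines surfaces the model's training data, biases, or failure modes.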
An example of a traditional marketplace for AI models is the AWS Marketplace for machine learning. An extensive survey by M. Xiu, Z. M. J. Jiang and B. Adams, “An Exploratory Study on Machine-Learning Model Stores,” in IEEE Software, explores features of model marketplaces while comparing them with popular app stores. The comparison is informative and identifies how model stores are organized along:
- Product Information
- Technical Documentation
- Product Submission & Store Review
- Legal Information
The table below is from this survey paper, accessible on arXiv.
With AI marketplaces proliferating and pre-trained models being widely adopted, it has become imperative for practitioners to examine the biases these models may carry. Many consumers of models overlook a model's antecedents in much the same way that many smartphone users install apps without rigorous investigation. While this may sound like a harsh indictment of AI practitioners, it is easy to imagine that model bias goes unexamined and that many models are used in a plug-and-play manner without much investigation of their origins.
A recent pre-print, Fairness in Deep Learning: A Computational Perspective, describes many aspects of fairness in AI models and develops a framework for measuring and mitigating bias. Some examples of model bias are shown in the table below, reproduced from the pre-print.
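To make the idea of measuring bias concrete, one widely used check is demographic parity: comparing a model's positive-prediction rate across groups. The sketch below is illustrative and not taken from the pre-print; the toy predictions and group labels are made up for the example.

```python
# Hedged sketch of a demographic parity check: the gap between the
# highest and lowest positive-prediction rates across groups.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions among members of `group`."""
    member_preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(member_preds) / len(member_preds)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: binary predictions for individuals from two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, labels))  # 0.75 for "a" vs 0.25 for "b" -> 0.5
```

A gap near zero suggests the model treats the groups similarly on this one metric; demographic parity is only one of several fairness criteria, and the pre-print surveys others.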
A comprehensive collection of resources related to fairness in AI can be found here. It is only a matter of time before marketplaces adopt fairness-testing infrastructure as a gate that models must pass before they can be released or distributed.
Another important aspect that practitioners need to care about as they consume models from marketplaces is model interpretability. Interpretability, especially in the context of deep neural networks, is a critical piece of the deployment and use of AI models. As models grow more complex in structure and permeate our day-to-day lives, knowing which levers affect an individual prediction can go a long way toward interpreting model decisions. Model false positives may sound clinical and dry, but their real-world manifestations can upend the lives of the individuals on the receiving end of AI errors. Such situations may demand looking under the hood of black-box models to discern their inner mechanisms.
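One simple way to probe which levers drive an individual prediction is perturbation-based attribution: zero out each input feature in turn and record how much the model's output changes. The sketch below uses a hypothetical linear scorer as a stand-in for any black-box model; both the weights and the input are illustrative.

```python
# Hedged sketch of perturbation-based (occlusion) attribution for a
# single prediction of a black-box model.

def black_box(features):
    # Hypothetical model: a fixed weighted sum standing in for any predictor.
    weights = [0.5, -2.0, 0.25]
    return sum(w * f for w, f in zip(weights, features))

def occlusion_attributions(model, features):
    """Score each feature by the output drop when it is zeroed out."""
    baseline = model(features)
    attributions = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0  # occlude one feature at a time
        attributions.append(baseline - model(perturbed))
    return attributions

x = [1.0, 1.0, 4.0]
print(occlusion_attributions(black_box, x))  # [0.5, -2.0, 1.0]
```

Here the second feature has the largest (negative) influence on this prediction. Real attribution methods such as SHAP or integrated gradients are more principled, but the occlusion idea above captures the basic intuition behind looking under the hood of a black box.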
The diagram below is from a recent pre-print, Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, and describes the complexity involved in explaining AI models. The use of AI models, whether consumed from a marketplace or developed in-house, requires practitioners to consider a large number of aspects. The figure below, from the same paper, puts this in perspective under the rubric of Responsible AI.
The use of AI models distributed online poses many challenges but also offers significant advantages. Models distributed by large organizations are often difficult to develop in-house without expending substantial resources, and doing so is often financially prohibitive. The use of AI is only increasing, and practitioners are responsible for ensuring that models pass through critical checks and balances before they are allowed to operate in production. Just as the sophistication of libraries and tools has made it increasingly easy to train complex models, there are now many resources available to ease the adoption of Responsible AI practices. While data cleaning and feature engineering consumed the bulk of the model lifecycle until a few years ago, it is becoming increasingly clear that the bulk of the lifecycle will instead involve the tasks that fall under the umbrella of responsible and sustainable AI.