In a remarkable display of academic brilliance and entrepreneurial spirit, graduate students from MIT, Harvard, and Stanford have come together to form an AI/AR software company with the potential to change how we work, play, and interact with our world. OmniVoid is at the forefront of developing AI and XR technology that promises to transform how businesses and organizations operate and engage with the world around them.
OmniVoid is a tech company on a mission to fill the void with a central family of AI products and custom-engineered software solutions, fusing expertise in AI and XR (extended reality, spanning augmented and virtual reality). Its team of engineers from these top institutions is on a journey to unlock the hidden potential of clients’ companies and brands by leveraging the latest advancements in AI and XR.
The rapid growth of AI is creating many opportunities for humanity, but it’s also creating some serious risks. Just as viruses are studied in biosafety labs and uranium is enriched in controlled facilities, the development of powerful AI systems should happen under supervision and control. Unlike virology or nuclear research, however, there’s no single authority overseeing the development of AI, leaving it vulnerable to misuse.
As the need for transparency in AI increases, there’s been a surge of interest in Explainable Artificial Intelligence (XAI). Essentially, XAI techniques (“explainers”) aim to reveal the reasoning behind the decisions that an AI model makes, opening up its black box to see what it’s thinking. These methods can improve the transparency and persuasiveness of AI systems, as well as help ML developers debug and optimize their models.
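To make the idea of an explainer concrete, here is a minimal sketch of one classic technique, perturbation (occlusion) importance: zero out each feature in turn and measure how much the model’s output moves. The toy model and function names below are illustrative assumptions, not part of the omnivoid platform.

```python
# Minimal perturbation-based explainer: a feature's importance is how much
# the model's output changes when that feature is zeroed out.

def model(features):
    # Toy "black box" the explainer treats as opaque (weights are arbitrary).
    weights = [1.0, 7.0, 2.0]
    return sum(w * x for w, x in zip(weights, features))

def explain(predict, instance):
    """Return one importance score per feature for a single input."""
    baseline = predict(instance)
    scores = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = 0.0  # occlude feature i
        scores.append(abs(baseline - predict(perturbed)))
    return scores

scores = explain(model, [1.0, 1.0, 1.0])
print(scores)  # → [1.0, 7.0, 2.0]; the second feature dominates
```

Real explainers such as SHAP or LIME are far more sophisticated, but they share this basic shape: probe the model with modified inputs and attribute the output change back to the features.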
There are several popular XAI libraries available, each with its own capabilities. But switching between explanation methods is cumbersome, and it can be difficult for data scientists and ML developers to know which library to choose when seeking a new method. In addition, the various XAI methods expose different interfaces and functionality, making them hard to integrate into existing ML codebases.
The omnivoid platform is designed to solve this problem by providing a unified interface for explaining any AI model, regardless of the framework or architecture. It allows users to select the explainers they want to use, specify the data (tabular, image, text) they want to explain, and launch a dashboard app built on Plotly Dash to visualize the results. Moreover, the omnivoid platform provides a Python API for integrating explainers into existing ML codebases, whether built on TensorFlow, PyTorch, or scikit-learn.
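A unified interface of this kind might look like the sketch below: every explainer implements the same `explain(model, instance)` call and is looked up by name, so callers can swap methods without changing their code. All class, function, and registry names here are hypothetical illustrations, not the actual omnivoid API.

```python
# Sketch of a unified explainer interface: two different explanation
# methods behind one common call signature, selected by name.

from typing import Callable, Dict, List

Model = Callable[[List[float]], float]

class Explainer:
    def explain(self, model: Model, instance: List[float]) -> List[float]:
        raise NotImplementedError

class OcclusionExplainer(Explainer):
    """Importance = output change when a feature is zeroed out."""
    def explain(self, model, instance):
        base = model(instance)
        return [abs(base - model(instance[:i] + [0.0] + instance[i + 1:]))
                for i in range(len(instance))]

class GradientExplainer(Explainer):
    """Importance = finite-difference gradient magnitude per feature."""
    def __init__(self, eps: float = 1e-4):
        self.eps = eps

    def explain(self, model, instance):
        base = model(instance)
        return [abs(model(instance[:i] + [instance[i] + self.eps] + instance[i + 1:]) - base) / self.eps
                for i in range(len(instance))]

REGISTRY: Dict[str, Explainer] = {
    "occlusion": OcclusionExplainer(),
    "gradient": GradientExplainer(),
}

def explain(method: str, model: Model, instance: List[float]) -> List[float]:
    return REGISTRY[method].explain(model, instance)

model = lambda x: 3.0 * x[0] + 1.0 * x[1]
print(explain("occlusion", model, [1.0, 1.0]))  # → [3.0, 1.0]
print(explain("gradient", model, [1.0, 1.0]))   # ≈ [3.0, 1.0]
```

The design choice matters: because both explainers share one signature, the dashboard and API layers described above only ever need to know the registry key, not the internals of each method.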
As the need for transparency in AI increases, it’s important that there’s a way to quickly and easily assess the quality of the results produced by different explainers. The omnivoid platform enables this by offering an objective evaluation system that allows users to compare the resulting explanations, making it easier to identify and select the most suitable explainer. This system is currently being used by researchers and ML engineers worldwide to evaluate the quality of their own XAI results.
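One common objective test for comparing explainers is a deletion (faithfulness) check: remove features in the order an explanation ranks them and watch how fast the model’s output collapses. A more faithful explanation causes a faster drop, giving a single number to compare. This is a generic sketch of that idea; the function names are assumptions, not the omnivoid evaluation API.

```python
# Deletion-based faithfulness metric: zero out features from most to least
# important (per the explanation) and average the output that remains.
# Lower averages mean the explanation identified the impactful features first.

def model(x):
    # Toy model whose true feature impacts are 5, 2, and 1.
    return 5.0 * x[0] + 2.0 * x[1] + 1.0 * x[2]

def deletion_score(model, instance, importance):
    """Average output retained while deleting features by claimed importance."""
    order = sorted(range(len(instance)), key=lambda i: -importance[i])
    x = list(instance)
    retained = []
    for i in order:
        x[i] = 0.0
        retained.append(model(x))
    return sum(retained) / len(retained)

instance = [1.0, 1.0, 1.0]
good = [5.0, 2.0, 1.0]  # ranks features in their true order of impact
bad = [1.0, 2.0, 5.0]   # reversed ranking
print(deletion_score(model, instance, good))  # lower score: more faithful
print(deletion_score(model, instance, bad))
```

Running this, the correctly ranked explanation scores lower than the reversed one, which is exactly the kind of side-by-side comparison an evaluation system makes routine.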