‘Project Rumi’: Meet Microsoft’s ‘Gorilla AI’

Monitor News Desk

Microsoft’s commitment to pushing the frontiers of artificial intelligence (AI) is on display in a recent addition it has backed: the Gorilla AI model, a large language model fine-tuned to generate accurate API calls. Funded by the tech giant, the project joins a series of notable models Microsoft has nurtured and invested in recently.

Gorilla AI joins a growing roster of AI models that have drawn attention. Microsoft has been at the forefront of fostering innovation in the field, backing a diverse range of models tailored to different applications.

One notable instance is the Orca 13B language model, a 13-billion-parameter model trained to imitate the step-by-step reasoning of larger foundation models, which Microsoft positions as a way for individuals and organizations to build capable AI of their own. The Redmond-based tech titan’s dedication to democratizing AI is further underscored by its work on Kosmos-2, a multimodal model that grounds language in images, linking phrases in text to specific regions of a picture.

Venturing into language understanding, Microsoft’s partnership with Meta has brought Llama 2, one of the most capable openly available LLM families, to a wider audience through Azure and Windows. The collaboration makes it easier for developers to build customized AI solutions on top of advanced models.

The AI parade doesn’t stop there. Microsoft’s research portfolio spans a diverse array of models, each tailored to specific needs: Phi-1, a compact model focused on code generation; CoDi, a composable diffusion model that can generate across text, images, audio, and video; and DeepRapper, which generates rap lyrics with rhyme and rhythm in mind. These are just a glimpse of the company’s push to extend what AI can do.

The advancements culminate in projects like Project Rumi, Microsoft’s effort to add paralinguistic signals, nonverbal cues drawn from audio and video, to prompt-based interactions with language models. These initiatives bring significant features and improvements and reflect Microsoft’s broader goal of making AI accessible and transformative across industries.

Gorilla itself rests on a focused technical recipe. The authors assemble a substantial corpus of machine learning APIs, known as APIBench, by scraping major model hubs such as TorchHub, TensorHub, and HuggingFace. Using self-instruction, they then generate pairs of user instructions and the corresponding API calls.
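As a rough illustration of what one such instruction–API pair might look like, the sketch below builds a single training example from a hypothetical HuggingFace hub entry. The field names, prompt wording, and hard-coded completion are illustrative assumptions, not the authors’ actual schema or prompts.

```python
# Hypothetical sketch of building one APIBench-style instruction/API pair.
# Field names and the prompt below are assumptions for illustration only.
import json

# An API entry as it might be scraped from a model hub such as HuggingFace.
api_entry = {
    "domain": "image-classification",
    "api_call": "pipeline('image-classification', model='google/vit-base-patch16-224')",
    "api_provider": "HuggingFace",
    "description": "Vision Transformer fine-tuned on ImageNet for image classification.",
}

def make_self_instruct_prompt(entry: dict) -> str:
    """Build a prompt asking an LLM to invent a plausible user request
    that this API would satisfy (the 'self-instruct' step)."""
    return (
        "Write a short, natural user request that would be answered by "
        f"calling the following API:\n{json.dumps(entry, indent=2)}\n"
        "Request:"
    )

prompt = make_self_instruct_prompt(api_entry)
# In the real pipeline an LLM would complete `prompt`; here we hard-code
# one plausible completion to show the resulting training pair.
instruction = "I have a photo of an animal and want to know what species it is."
training_pair = {"instruction": instruction, "api": api_entry}
print(json.dumps(training_pair, indent=2))
```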

The fine-tuning process involves converting the data into a user-agent chat-style conversation format and performing standard instruction fine-tuning on the base LLaMA-7B model.
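A minimal sketch of that stage, assuming a standard Hugging Face Transformers training loop, might look like the following. The chat template, the huggyllama/llama-7b checkpoint name, and the hyperparameters are assumptions for illustration, not the exact recipe used for Gorilla.

```python
# Hedged sketch: turn an instruction/API pair into a user-agent chat string,
# then run standard causal-LM fine-tuning with Hugging Face Transformers.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

def to_chat_example(pair: dict) -> str:
    # One user turn (the instruction) and one assistant turn (the API call).
    return (f"USER: {pair['instruction']}\n"
            f"ASSISTANT: {pair['api']['api_call']}")

pairs = [{"instruction": "Classify the species in this animal photo.",
          "api": {"api_call": "pipeline('image-classification', "
                              "model='google/vit-base-patch16-224')"}}]

# Base checkpoint name is an assumption; any LLaMA-7B weights would do.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

def tokenize(batch):
    return tokenizer([to_chat_example(p) for p in batch["pair"]],
                     truncation=True, max_length=512)

ds = Dataset.from_dict({"pair": pairs}).map(
    tokenize, batched=True, remove_columns=["pair"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gorilla-ft", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```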
