Microsoft's New Local Copilot AI Model: An Affordable and Secure Alternative to Large AI Systems

Redmond, Washington, United States of America

• Microsoft is developing a new local AI model called Copilot.
• The local version of Copilot can run on devices such as PCs and smartphones, making it accessible to a wider audience.
• Copilot is expected to be more affordable and secure than larger AI systems like GPT-4.
• Industries like automotive and healthcare are expected to benefit from Copilot's chatbot services and quick diagnosis capabilities.
• Microsoft's focus on smaller AI models could give it an edge over competitors in the market.

Microsoft is reportedly working on a new local version of its AI model, Copilot. The move follows the company's recent unveiling of Phi-3-mini, a smaller AI model that can run locally on devices such as PCs and smartphones. According to Microsoft and industry observers such as Anton Korinek of the University of Virginia and Gil Luria of D.A. Davidson, a local version of Copilot could reshape how people use AI in their daily lives.

Microsoft's Phi-3-mini is a significant step forward because it requires far less processing power and data than larger models like GPT-4. That makes it an affordable option for individuals and businesses that cannot justify the cost of larger AI systems. And because both the model and the data it handles stay on the device, user information is easier to keep private and secure.
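To make "running locally" concrete, here is a minimal sketch of loading and querying Phi-3-mini on an ordinary PC with the open-source Hugging Face transformers library. The model ID matches the checkpoint Microsoft published; the generation settings and prompt are illustrative assumptions, not details from Microsoft's announcement.

```python
# Minimal sketch: run Phi-3-mini entirely on the local machine.
# Assumes `pip install transformers torch` and the published
# microsoft/Phi-3-mini-4k-instruct checkpoint (~3.8B parameters).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",      # CPU, or a single consumer GPU if present
    torch_dtype="auto",
    trust_remote_code=True,
)

# Phi-3-mini is an instruction-tuned model, so use its chat template.
messages = [{"role": "user", "content": "What is a local AI model?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# After the one-time weight download, nothing here touches the network:
# the prompt, the weights, and the answer all stay on the device.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```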

The new local version of Copilot could matter in industries from automotive to healthcare. Automakers, for instance, could use it to power dealership chatbots that answer customer questions about car models and features (see the sketch below). In healthcare, doctors and nurses could use it to get preliminary diagnostic and treatment suggestions from patient data that never leaves the local machine.
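As a sketch of the dealership scenario, the same locally loaded model could be wrapped in a simple chat loop with a domain-specific system prompt. Everything here beyond the model itself, including the prompt wording, the loop, and the function name dealership_chat, is a hypothetical illustration, not a description of Microsoft's product.

```python
# Hypothetical showroom assistant on top of the locally hosted model.
# Reuses the `model` and `tokenizer` from the previous sketch, so no
# customer question ever leaves the dealership PC.
def dealership_chat(model, tokenizer):
    history = [{
        "role": "system",
        "content": "You are a car dealership assistant. Answer questions "
                   "about car models, trims, and features concisely.",
    }]
    while True:
        question = input("Customer: ").strip()
        if not question:          # empty line ends the session
            break
        history.append({"role": "user", "content": question})
        inputs = tokenizer.apply_chat_template(
            history, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=256)
        reply = tokenizer.decode(
            outputs[0][inputs.shape[-1]:], skip_special_tokens=True
        )
        history.append({"role": "assistant", "content": reply})
        print("Assistant:", reply)

dealership_chat(model, tokenizer)
```

The same pattern, with a different system prompt and data source, would cover the healthcare example, with the added benefit that patient records stay on premises.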

Microsoft's move towards smaller AI models is a response to the growing demand for affordable AI solutions that can run efficiently on edge devices. This trend is expected to continue as more companies explore the potential of local AI systems in various industries.

Despite its size, Microsoft faces real competition in this market from Google and OpenAI. Its focus on smaller AI models, however, could give it an edge by reaching the wide audience for whom larger AI systems are out of financial reach.

In conclusion, Microsoft's local version of Copilot is an exciting development in the field of AI. It promises a more affordable and more private alternative to larger AI systems while, according to Microsoft, giving up little in capability. As demand for local AI solutions continues to grow, the company's bet on smaller models could prove to be a smart business decision.



Confidence

91%

Doubts
  • Are there any specific benchmarks or tests that have been conducted on the performance of Copilot compared to larger AI systems?
  • What is the exact cost difference between Copilot and larger AI systems?

Sources

97%

  • Unique Points
    • Microsoft introduced three smaller A.I. models named Phi-3.
    • The smallest Phi-3 model can fit on a smartphone and run on regular computer chips.
    • Using the new models is 'substantially cheaper' than using larger models like GPT-4, according to Microsoft.
    • The smallest systems require less processing and may be less accurate or sound more awkward.
  • Accuracy
    • Microsoft may introduce a new AI model, Phi-3-mini, that can run locally on PCs
    • Phi-3 Mini has the capability of 10x larger models
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

98%

  • Unique Points
    • Microsoft may introduce a new AI model, Phi-3-mini, that can run locally on PCs
    • Phi-3-mini is small, occupying less than 1.8GB of memory
    • It can achieve more than 12 tokens per second on an iPhone 14 with an A16 Bionic chip
    • Microsoft plans to demonstrate a local version of Copilot at Microsoft Build in May
  • Accuracy
    • The smallest Phi-3 model can fit on a smartphone and run on regular computer chips.
    • Phi-3 Mini has the capability of 10x larger models and is licensed for both research and commercial usage.
    • NVIDIA accelerated Microsoft's new Phi-3 Mini open language model using NVIDIA TensorRT-LLM
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (95%)
    The author expresses a clear preference for a local version of Copilot over the cloud-based version, implying that local processing is necessary for privacy and security reasons. The author also states that Microsoft's new Phi-3-mini LLM is small enough to run locally on PCs with 1.8GB of extra memory, making it a viable option for many more users than the cloud-based version.
    • It's a familiar argument: if you issue a search request to Bing, Google Gemini, Claude, or Copilot, it lives in the cloud. This could be embarrassing (...)
    • Many people believe that Microsoft will eventually provide a version of Copilot that will run within Windows right on your PC.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

98%

  • Unique Points
    • Microsoft introduced its smallest AI model to date, called the Phi-3-mini.
  • Accuracy
    • Microsoft introduced its smallest AI model to date, called the Phi-3-mini.
    • The smallest Phi-3 model can fit on a smartphone and run on regular computer chips.
    • Microsoft may introduce a new AI model, Phi-3-mini, that can run locally on PCs
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (95%)
    The author makes several comparisons between large and small AI models without explicitly stating that large models are superior to small ones. However, the context implies that larger models have more knowledge and capabilities than smaller ones. This is an example of a dichotomous depiction fallacy, as the author oversimplifies the comparison between large and small AI models by presenting them as having only two distinct sizes with clear-cut differences. The author also uses rhetorical questions to appeal to the reader's emotions and assumptions, such as 'You want it to be fast, you want it to be quick.' This is an example of an informal fallacy, the rhetorical question. No other fallacies were found.
    • You can think of generative artificial intelligence systems as having different-sized brains.
    • But sometimes you don't need or want your AI system to be all-knowing.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

98%

  • Unique Points
    • NVIDIA accelerated Microsoft’s new Phi-3 Mini open language model using NVIDIA TensorRT-LLM
    • Phi-3 Mini has the capability of 10x larger models and is licensed for both research and commercial usage
    • Workstations with NVIDIA RTX GPUs or PCs with GeForce RTX GPUs can run the model locally
    • Phi-3 Mini was trained on 3.3 trillion tokens in seven days on 512 NVIDIA H100 Tensor Core GPUs
    • The model has two variants, one supporting 4K tokens and the other supporting 128K tokens for long contexts
    • Developers can try Phi-3 Mini with the 128K context window at ai.nvidia.com
    • Phi-3 Mini is compact enough to run efficiently on edge devices with only 3.8 billion parameters
  • Accuracy
    • Phi-3 Mini has the capability of 10x larger models and is licensed for both research and commercial usage
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (95%)
    The article contains some inflammatory rhetoric and an appeal to authority, but no formal or informal fallacies were found. The author states that Phi-3 Mini has the capability of 10x larger models and was trained on 3.3 trillion tokens in only seven days on 512 NVIDIA H100 Tensor Core GPUs (inflammatory rhetoric). The author also mentions that NVIDIA is an active contributor to the open-source ecosystem and has released over 500 projects under open-source licenses, stating 'Today’s news expands on long-standing NVIDIA collaborations with Microsoft' (appeal to authority). However, these statements do not affect the validity of any arguments made in the article.
    • The model has 3.8 billion parameters and was trained on 3.3 trillion tokens in only seven days on 512 NVIDIA H100 Tensor Core GPUs.
    • NVIDIA is an active contributor to the open-source ecosystem and has released over 500 projects under open-source licenses.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication