
Saturday, 14 December 2024

Microsoft Unveils Phi-4, a Powerful AI Model for Research







Microsoft, the multinational corporation that develops, supports and sells computer software and services, has announced Phi-4, a small yet powerful new generative Artificial Intelligence (AI) model, for research preview.


Phi-4 reportedly comes with 14 billion parameters, and is positioned as a small yet powerful model that is said to ‘excel’ in specialized tasks, particularly mathematical reasoning.


In its technical report, the tech giant said, “We present phi-4, a 14-billion parameter language model developed with a training recipe that is centrally focused on data quality. Unlike most language models, where pre-training is based primarily on organic data sources such as web content or code, phi-4 strategically incorporates synthetic data throughout the training process. While previous models in the Phi family largely distill the capabilities of a teacher model, specifically GPT-4o, phi-4 substantially surpasses its teacher model on STEM-focused QA capabilities, giving evidence that our data-generation and post-training techniques go beyond distillation. Despite minimal changes to the phi-3 architecture, phi-4 achieves strong performance relative to its size, especially on reasoning-focused benchmarks, due to improved data, training curriculum, and innovations in the post-training scheme.”


Currently, the model is available in a limited release, mainly for research purposes, through the company’s Azure AI Foundry platform. It is touted as being able to outperform much larger models, including Google’s Gemini Pro 1.5 and OpenAI’s GPT-4o, on tasks that require complex reasoning. This is most evident in the model’s ability to solve mathematical problems, a capability Microsoft has heavily emphasized in its rollout of Phi-4.
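For developers with research access, querying the model is expected to look like any other chat-completions call against an Azure AI endpoint. The sketch below uses the azure-ai-inference Python SDK as one plausible route; the endpoint URL, API key and the “Phi-4” model name are placeholders, and the exact deployment details may differ from what Azure AI Foundry exposes.

```python
# Minimal sketch of querying a Phi-4 deployment hosted on Azure AI Foundry.
# Assumes the azure-ai-inference package; the endpoint, key and model name
# below are placeholders, not confirmed values from Microsoft's announcement.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder endpoint
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder key
)

response = client.complete(
    model="Phi-4",  # placeholder deployment/model name
    messages=[
        SystemMessage(content="You are a careful math tutor. Show your reasoning."),
        UserMessage(content="A train travels 180 km in 2.5 hours. What is its average speed in km/h?"),
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```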

Presently, larger models like GPT-4 and Gemini Ultra are built with hundreds of billions, or even trillions, of parameters. Phi-4, on the other hand, aims to achieve comparable results with far fewer computational resources.
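A rough back-of-the-envelope comparison illustrates the gap. Assuming 16-bit weights (about 2 bytes per parameter) and ignoring activation and KV-cache memory, the sketch below estimates how much memory is needed just to hold the weights; the 500-billion-parameter figure is a hypothetical stand-in for a frontier-scale model.

```python
# Back-of-the-envelope estimate of weight memory (weights only, 16-bit precision).
def approx_weight_memory_gb(num_params: float, bytes_per_param: float = 2.0) -> float:
    """Memory needed to hold the model weights alone, in gigabytes."""
    return num_params * bytes_per_param / 1e9

print(f"Phi-4, 14B parameters:          ~{approx_weight_memory_gb(14e9):.0f} GB")   # ~28 GB
print(f"Hypothetical 500B-param model:  ~{approx_weight_memory_gb(500e9):.0f} GB")  # ~1000 GB
```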


Microsoft attributes Phi-4’s strong performance to the use of “high-quality synthetic datasets” alongside data from human-generated content, while maintaining lower computational costs. 



Phi-4 was trained on synthetic datasets that were specifically crafted to provide diverse, structured problem-solving scenarios. These were supplemented by high-quality human-generated content to ensure that the model encountered a wide range of real-world scenarios during training.



Once Phi-4 is made available to a wider user base, it could prove particularly attractive to mid-sized companies and organizations with limited computing resources.


By keeping costs significantly lower than those of large-scale AI models, Phi-4 can free up resources that can be directed toward other avenues. This could benefit enterprises that have hesitated to adopt AI solutions due to the high resource demands of larger models.

Google Commences Gemini 2.0 Flash Experimentation




Tech giant Google has announced the launch of Gemini 2.0 Flash and its associated research prototypes. It is believed that this is a step forward in the field of Artificial Intelligence (AI), as the new model is designed for developers and showcases several experimental agent-based applications.


Building upon the success of its predecessor, 1.5 Flash, Gemini 2.0 Flash offers enhanced performance with similarly rapid response times. Notably, it outperforms 1.5 Pro on key benchmarks while operating at twice the speed.


Beyond speed improvements, 2.0 Flash introduces new capabilities, including support for multimodal inputs like images, video, and audio, as well as multimodal outputs such as generated images and multilingual text-to-speech audio. It also integrates with external tools like Google Search and code execution environments.


This model is currently available to developers in an experimental phase through the Gemini API in Google AI Studio and Vertex AI, with broader availability planned for January.
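For developers who want to try it, a basic text call through the Gemini API might look like the sketch below, which uses the google-generativeai Python package; the "gemini-2.0-flash-exp" model identifier is an assumption and should be checked against the current Gemini API documentation. The multimodal inputs and outputs and the tool integrations described above are configured through additional request options not shown here.

```python
# Minimal sketch of a text-only call to Gemini 2.0 Flash via the Gemini API.
# Assumes the google-generativeai package; the model identifier below is an
# assumption and may differ from the name exposed in Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key obtained from Google AI Studio (placeholder)

model = genai.GenerativeModel("gemini-2.0-flash-exp")  # assumed experimental model name
response = model.generate_content(
    "Summarize, in two sentences, what an agent-based AI application is."
)
print(response.text)
```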


Beyond the core model improvements, the announcement highlights the development of AI agents. These agents represent a new approach to interacting with technology, focusing on task completion and proactive assistance. Several research prototypes demonstrate this concept, such as:


  • Project Astra: This project explores the concept of a universal AI assistant capable of understanding multiple languages, utilizing tools like Google Search and Maps, and maintaining context over longer conversations. It has seen improvements in dialogue, tool use, memory, and latency.

  • Project Mariner: This prototype focuses on browser-based agent interaction. It aims to understand and interact with web content, including text, images, and code.

  • Jules: This is an AI-powered code agent designed to assist developers within a GitHub workflow. It can analyze issues, develop plans, and execute code under developer supervision.


  • Gaming Agents: These agents are designed to interact with video games, offering real-time suggestions and utilizing Google Search for game-related information.

  • Robotics: Google is also exploring Gemini 2.0’s spatial reasoning for applications in robotics.

A strong emphasis is placed on responsible AI development. The development team is actively addressing safety and security concerns through various measures, including internal reviews, red teaming, safety training, and collaboration with external experts. Specific examples include mitigations against unintentional data sharing in Project Astra and protection against prompt injection in Project Mariner.



The release of Gemini 2.0 Flash and the associated agent prototypes marks an important advancement in AI. The focus on performance, multimodality, and agent-based interaction, combined with a commitment to responsible development, positions Gemini as a key player in the ongoing AI evolution.




Friday, 13 December 2024

Tinubu Appoints Nwakuche As Acting CG of NCoS, as Nababa Bows Out

President Bola Tinubu has approved the appointment of Mr. Sylvester Ndidi Nwakuche, MFR, as the acting Controller General (CG) of the Nigerian Correctional Service (NCoS).

This followed the expiration of the tenure of the outgoing CG, Mr. Haliru Nababa. 

The appointment was announced in a statement issued by the Secretary to the Civil Defence, Immigration, Fire Service and Correctional Service Board (CDCFIB), Mr. Ja’afaru Ahmed, on Friday, December 13, 2024. He noted that the appointment takes effect from Sunday, December 15.

Mr. Ahmed disclosed that Mr. Nwakuche’s appointment was a testament to his wealth of experience and dedication to the Service. 

He stated that President Tinubu charged the new NCoS boss to bring his wealth of experience to bear in his new capacity and ensure the continued transformation of the service.

Mr. Nwakuche, who hails from Oguta LGA in Imo State and was born on November 26, 1966, was, until his appointment, the Deputy Controller General of the NCoS in charge of the Training and Staff Development Directorate, where he played a crucial role in shaping the training and development policies of the service.

He is a Fellow of the prestigious National Institute for Policy and Strategic Studies (NIPSS), as well as a well-decorated and notable officer, who holds the national honour of Member of the Federal Republic (MFR).
