# Getting Started with Notebooks

Jupyter Notebook demos for Phi-3 and GPT-4o.

### 01.Phi3_Instruct.ipynb & 02.Phi3_Vision.ipynb

These notebooks are ready to run: open them in your preferred Jupyter notebook environment and execute the cells step by step. They will guide you through the basic functionality and capabilities of Phi-3.
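Under the hood, the instruct notebooks feed the model prompts in the Phi-3 chat format. As a rough illustration, here is a minimal sketch of that format; the `<|user|>`/`<|assistant|>`/`<|end|>` special tokens follow the published Phi-3 chat template, while the helper function name is hypothetical (in the notebooks this is normally handled for you by the tokenizer's `apply_chat_template`):

```python
# Minimal sketch of the Phi-3 instruct chat format.
# The <|user|>/<|assistant|>/<|end|> tokens follow the published Phi-3
# chat template; format_phi3_prompt is a hypothetical helper name.

def format_phi3_prompt(messages):
    """Flatten a list of {role, content} dicts into a Phi-3 prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    parts.append("<|assistant|>\n")  # cue the model to respond
    return "".join(parts)

prompt = format_phi3_prompt([
    {"role": "user", "content": "Explain what a Jupyter notebook is in one sentence."}
])
print(prompt)
```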

### 03.GPT4o_Vision

For this notebook, you need an Azure OpenAI Service subscription or access to GitHub Models.

This notebook will demonstrate the capabilities of GPT-4o in understanding and processing images.
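For image understanding, GPT-4o accepts images inline as base64-encoded data URLs inside the chat messages. The sketch below builds such a request payload; the message shape follows the OpenAI chat-completions vision format, and the image bytes are a stand-in for a real file read:

```python
import base64

# Sketch: build a chat-completions payload that sends an image to GPT-4o.
# The message shape follows the OpenAI chat-completions vision format;
# the image bytes below stand in for e.g. open("photo.png", "rb").read().
image_bytes = b"\x89PNG..."
b64 = base64.b64encode(image_bytes).decode("ascii")

payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{b64}"},
                },
            ],
        }
    ],
}
```

This payload is what the client library ultimately posts to the chat-completions endpoint, whether you reach GPT-4o through Azure OpenAI or through GitHub Models.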

## Setting Up Models

### Azure OpenAI Service Subscription

1. **Create an Azure account:** If you don't have an Azure account, you can create one here.
2. **Subscribe to the Azure OpenAI Service:** Navigate to the Azure portal and search for "OpenAI", then follow the instructions to subscribe to the Azure OpenAI Service.
3. **Get your API key:** Once subscribed, you will receive an API key. Keep this key safe, as you will need it to access the service.
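A common way to keep the key out of the notebook source is to read it from environment variables. Here is a minimal sketch; the variable names and endpoint shown are a common convention, not a requirement, and the dummy values at the end are for illustration only:

```python
import os

def load_azure_openai_config():
    """Read Azure OpenAI credentials from environment variables so the
    key never appears in the notebook source. The variable names are a
    common convention, not a requirement."""
    api_key = os.environ.get("AZURE_OPENAI_API_KEY")
    endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")  # e.g. https://<resource>.openai.azure.com
    if not api_key or not endpoint:
        raise RuntimeError(
            "Set AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT before running the notebook."
        )
    return {"api_key": api_key, "endpoint": endpoint}

# Illustration only: dummy values so the sketch runs end to end.
os.environ.setdefault("AZURE_OPENAI_API_KEY", "dummy-key")
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://example.openai.azure.com")
cfg = load_azure_openai_config()
```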

### Get Your GitHub Model Catalog Access and Personal Token

This service is free but currently in preview. You can request access at GitHub Marketplace Models. A personal access token is not required if you are using the Codespaces environment.
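With GitHub Models access, the notebook can use an OpenAI-compatible endpoint with your GitHub personal access token as the API key. A minimal configuration sketch follows; the endpoint URL is the one used during the GitHub Models preview and may change, and in Codespaces the `GITHUB_TOKEN` variable is typically provided for you:

```python
import os

# Sketch: configuration for calling GPT-4o through GitHub Models instead
# of Azure OpenAI. The base_url is the endpoint used during the GitHub
# Models preview and may change; in Codespaces, GITHUB_TOKEN is set for you.
github_models_config = {
    "base_url": "https://models.inference.ai.azure.com",
    "api_key": os.environ.get("GITHUB_TOKEN", "<your-personal-access-token>"),
    "model": "gpt-4o",
}
```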

## Running the Notebooks

1. **Open the notebooks:** Use Jupyter Notebook, JupyterLab, or any other compatible environment to open the .ipynb files.
2. **Execute the cells:** Run each cell in the notebooks sequentially. The cells contain code and instructions that will guide you through the process.

## Comparing Phi-3-Vision and GPT-4o

1. **Run the Phi-3-Vision notebook:** Follow the instructions in the 02.Phi3_Vision.ipynb notebook to see how Phi-3 processes and understands images.
2. **Run the GPT-4o Vision notebook:** Use the 03.GPT4o_Vision notebook to see how GPT-4o handles image understanding. Make sure you have your Azure OpenAI Service API key or GitHub Models access configured.
3. **Compare the results:** After running both notebooks, compare the outputs. You will notice that Phi-3-Vision also has strong capabilities in understanding code and images, similar to GPT-4o.