These notebooks are ready to run. Simply open them in your preferred Jupyter notebook environment and execute the cells step-by-step.
They will guide you through the basic functionalities and capabilities of Phi-3.
For this notebook, you need an Azure OpenAI Service subscription or access to GitHub Models.
This notebook will demonstrate the capabilities of GPT-4o in understanding and processing images.
Create an Azure Account: If you don’t have an Azure account, you can create one here.
Navigate to the Azure portal and search for “OpenAI”. Follow the instructions to subscribe to the Azure OpenAI Service.
Once subscribed, you will receive an API key. Keep this key safe, as you will need it to access the service.
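A common way to keep the key out of the notebook itself is to read it from environment variables. This is a minimal sketch; the variable names `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` are assumptions here, so match them to whatever the notebooks actually expect.

```python
import os

def load_azure_config() -> dict:
    """Read Azure OpenAI credentials from environment variables so the
    API key never appears in the notebook source. The variable names
    below are illustrative assumptions, not fixed by the notebooks."""
    key = os.getenv("AZURE_OPENAI_API_KEY")
    endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    if not key or not endpoint:
        raise RuntimeError(
            "Set AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT before running."
        )
    return {"api_key": key, "endpoint": endpoint}
```

You can then call `load_azure_config()` at the top of a notebook and pass the returned values to whatever client the notebook constructs.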
This service is free but currently in preview; you can request access at GitHub Marketplace Models. A personal access token is not required if you are using the Codespaces environment.
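In practice, this means the notebooks can look for a token in the environment: Codespaces typically injects a `GITHUB_TOKEN` automatically, while elsewhere you export your own personal access token under the same name. A small sketch (the helper name is hypothetical):

```python
import os
from typing import Optional

def github_models_token() -> Optional[str]:
    """Return a token for GitHub Models. Inside Codespaces a GITHUB_TOKEN
    is usually injected automatically; outside Codespaces, export a
    personal access token yourself. Returns None if nothing is set."""
    return os.getenv("GITHUB_TOKEN")
```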
Use Jupyter Notebook, JupyterLab, or any other compatible environment to open the .ipynb files.
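If you are curious what an .ipynb file actually is: it is plain JSON containing a list of cells, which is why any compatible environment can open it. A small illustration using only the standard library (the file name and cell contents here are made up):

```python
import json
from pathlib import Path

# A minimal, hypothetical notebook: .ipynb is just JSON with a "cells" list.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Demo"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [], "source": ["print('hello')"]},
    ],
}
path = Path("example.ipynb")
path.write_text(json.dumps(nb))

# Reading it back shows the structure any notebook environment parses.
loaded = json.loads(path.read_text())
code_cells = [c for c in loaded["cells"] if c["cell_type"] == "code"]
print(len(loaded["cells"]), len(code_cells))  # total cells, code cells
```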
Run each cell in the notebooks sequentially. The cells contain code and instructions that will guide you through the process.

Comparing Phi-3-Vision and GPT-4o
Follow the instructions in the 02.Phi3_Vision.ipynb notebook to see how Phi-3 processes and understands images.
Use the 03.GPT4o_Vision notebook to see how GPT-4o handles image understanding.
Make sure you have your Azure OpenAI Service API key or GitHub Models access configured.
After running both notebooks, compare the outputs. You will notice that Phi-3-Vision has strong capabilities in understanding code and images, comparable to GPT-4o.
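One lightweight way to compare the two sets of outputs is a rough textual similarity score over the models' answers. This is only a sketch with made-up answer strings, using the standard library's `difflib`; real comparison should also involve reading the answers yourself.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two model answers, from 0.0 to 1.0."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical outputs from the two notebooks, for illustration only.
phi3_answer = "The image shows a bar chart of quarterly revenue."
gpt4o_answer = "The image shows a bar chart of quarterly sales."
print(round(similarity(phi3_answer, gpt4o_answer), 2))
```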