Explain details about the files #3
Sure, all of the model files reside inside data/model-development. For the models, I just ran final_model_1.py three times and final_model_2.py twice, for 5 total .h5 models. That's it when it comes to the deep learning part. After that, I imported these model files into my Django web backend project inside handwriting/utils/models/. In the backend, I did some manipulation of the incoming images that come from the frontend, and made predictions using these .h5 files. I chose React as the web frontend to communicate these images to Django. The React frontend communicates with the backend using blobs (which were once images of the canvas). Eventually I get a response from the backend with its prediction and display the result on the frontend. There is a lot of web development involved in this project, which is more my strong suit. The actual AI part is just a small section of the project (just the 5 .h5 model files that are used for prediction).
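To make the prediction step concrete, here is a minimal sketch of how the backend might combine the five .h5 models' outputs. This is an illustration only: the comment above doesn't say how the five models are combined, so the averaging strategy, the preprocessing, and the function names here are all assumptions, not code from the repository.

```python
# Hypothetical sketch: preprocessing a canvas image and averaging the class
# probabilities from several models. Averaging is an assumed ensemble strategy;
# the actual combination logic in handwriting/utils/ may differ.
import numpy as np

def preprocess(image_array):
    """Normalize a grayscale canvas image into the batched shape a Keras CNN
    typically expects: (1, height, width, 1), with pixels scaled to [0, 1]."""
    x = image_array.astype("float32") / 255.0
    return x.reshape(1, *image_array.shape, 1)

def ensemble_predict(per_model_probs):
    """Average per-class probabilities from several models, then pick the
    class with the highest averaged probability."""
    avg = np.mean(per_model_probs, axis=0)
    return int(np.argmax(avg))

# Example with mock outputs from 5 models over 10 digit classes,
# each strongly favoring class 3:
mock = [np.eye(10)[3] * 0.8 + 0.02 for _ in range(5)]
print(ensemble_predict(mock))  # -> 3
```

In the real project, `per_model_probs` would come from calling `.predict()` on each model loaded from the .h5 files in handwriting/utils/models/.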
Thanks for the reply. But can you give me some guidance on where I can start this project for my own purposes?
I send the canvas image blob over to Django inside the /src/components/Canvas/Canvas.js React component, and I receive the blob at the corresponding Django POST endpoint in /handwriting/views.py. That might help you find where the connection is, to start you off.
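The receiving side of that flow can be sketched roughly as follows. This is a hedged illustration, not the actual code in /handwriting/views.py: the upload field name, the PNG check, and the function name are all assumptions (canvas.toBlob() produces PNG by default, so a PNG payload is a reasonable guess).

```python
# Hypothetical sketch of reading the uploaded canvas blob, written as a plain
# function over a file-like object -- the same kind of object Django would
# hand you via request.FILES["image"] inside a POST view in views.py.
import io

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # magic bytes at the start of every PNG

def read_canvas_upload(uploaded_file):
    """Validate and return the raw bytes of an uploaded canvas blob.
    In the real view, these bytes would then be decoded into an image
    and passed to the models for prediction."""
    data = uploaded_file.read()
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("expected a PNG canvas blob")
    return data

# Simulated upload: a PNG signature followed by 16 placeholder bytes.
blob = io.BytesIO(PNG_SIGNATURE + b"\x00" * 16)
print(len(read_canvas_upload(blob)))  # -> 24
```

In an actual Django view, this logic would sit inside the POST handler, with the decoded image forwarded to the prediction code and the result returned as a JSON response for the React frontend to display.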
Can you describe the model files? I mean, which files go where, how they are linked, and what the overall file layout looks like?