Saturday, July 8, 2017

Using webview for Talking Communication App


I am creating a communication app.

The screen is a canvas.

Sprites can be set to images and added to the canvas.  The sprites can be given words/phrases to say (or audio clips to play) when clicked.

With App Inventor, I cannot have a new sprite created every time an "Add new" button is pressed.  So the largest number of sprites that can be added to the canvas is the number of sprites I drag into the app manually in the designer.

Since this is a talking communication app, that is a serious drawback.

Recently I found out about creating an HTML file, uploading it as an asset, and loading that page with the WebViewer component.

Can (and should) this program be coded entirely, or partly, in HTML?  I know that using JavaScript or jQuery you can create a new element, such as a button, whenever a button is pressed.  That would remove the sprite limit.
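To show what I mean, here is a minimal sketch of such a page (the element ids, labels, and the alert placeholder are all just illustration; a real app would let the user pick an image and a phrase):

```html
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Sprite board sketch</title></head>
<body>
  <!-- This div stands in for the App Inventor canvas -->
  <div id="board"></div>
  <button id="addNew">Add new</button>

  <script>
    // Each press of "Add new" creates one more sprite button,
    // so there is no fixed limit on the number of sprites.
    var spriteCount = 0;
    document.getElementById('addNew').addEventListener('click', function () {
      spriteCount += 1;
      var n = spriteCount;  // capture this sprite's own number
      var sprite = document.createElement('button');
      sprite.textContent = 'Sprite ' + n;
      sprite.addEventListener('click', function () {
        // Placeholder for "say the phrase / play the clip" behavior.
        alert('Phrase for sprite ' + n);
      });
      document.getElementById('board').appendChild(sprite);
    });
  </script>
</body>
</html>
```

Uploaded as an asset and opened in the WebViewer, this page creates a new clickable "sprite" on every press, which is exactly what the blocks editor cannot do.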

But I'd also need to access and use TinyDB, text-to-speech, the audio player, and more from the webpage.  I'd need to be able to get images from the Internet (<img src="http://..." would work, I think), get images from the device (using a file path), and get audio clips from the device (using a file path).
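As I understand it, the WebViewer does provide a bridge for this: the page can read and write a shared string via `window.AppInventor.getWebViewString()` and `setWebViewString()`, and the blocks side sees changes through the WebViewer's WebViewStringChange event.  The message format below (`speak:` / `reply:`) is my own invention, not anything built in; the blocks would have to be written to parse it and then call TinyDB or TextToSpeech:

```html
<script>
  // Send a request to the blocks side (e.g. "speak this phrase").
  function askApp(message) {
    if (window.AppInventor) {
      window.AppInventor.setWebViewString(message);
      // In the blocks editor, WebViewer.WebViewStringChange fires;
      // the blocks parse the message, call TinyDB/TextToSpeech,
      // and write any answer back with "set WebViewer.WebViewString".
    }
  }

  // The page is not notified of the answer, so poll for it.
  function pollReply(callback) {
    var timer = setInterval(function () {
      var reply = window.AppInventor.getWebViewString();
      if (reply && reply.indexOf('reply:') === 0) {
        clearInterval(timer);
        callback(reply.slice('reply:'.length));
      }
    }, 200);
  }

  // Hypothetical usage: ask the app to speak a phrase aloud.
  askApp('speak:Hello, I am hungry');
</script>
```

Everything has to be squeezed through that one string, so requests and replies need an agreed-on format, but in principle it would let the HTML page use any component the blocks can use.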

Is all this possible in App Inventor and HTML/CSS/jQuery/JavaScript?  And if so, would some of you be willing to help me as I (try to) create this app?  It's for a good cause: Giving special needs people (especially children) a way to communicate.  I know many people who would benefit from this app.

P.S.  The things described in this post are only the very most basic features I want to offer in this app.  Ask, and you shall receive more details.


Min Sullivan (minsullivan@sullivandesigns.us)

