## 2017년 7월 10일 월요일

### calculator

I'm looking for someone who can help me with a problem in App Inventor 2: a calculator (*, -, +, /) that evaluates an expression such as 9 + 1 + 3 = 13 and also displays it.
It would be friendly if someone could help me :)

--

Yes, you can solve 9 + 1 + 3 = 13

--

### How many MB does a single sprite take up

How many MB does a single sprite take up?  I've heard tell that there's a limit of 100 MB for each App Inventor app.

--
If you use a sprite (on the canvas), you have to upload an image, and at that point you can see how big it is. Normally you would use a paint or photo program to create it, and there you can also change its size.

This sprite is 163×309 pixels and the file size is 11 KB (kilobytes); you could use 91 of them and only reach about 1 MB.

--
Thanks! After an app is uploaded, how many images from the device (gotten via filepath) can be shown at once?

--
In the documentation about using images, the last paragraph of "Out of memory errors" says: "App Inventor lets you get images (for example, from the Web) and import these into your app as Media. When you use the images, App Inventor will rescale them to fit the designated screen area in your app. Sometimes the image to be displayed will be larger than the designated phone area. Even so, the large image needs to be held in memory in order for rescaling to occur, even if the result of the rescaling will be a small image."
I had no idea images from the web would take up space.  So that means I couldn't bring in and show a lot of large size (or a whole lot of smaller size) images from the web?

Do images gotten via filepath from the device take up space in the app, too?  If so, I'm sunk.  I'm creating an app that will be displaying thousands of images from the web and/or from the device.

--
"and import these into your app as Media"

which means the documentation is talking about uploading files into the app's assets, not about displaying images directly from the web

if you want to display images directly from the web, then they should not be too large, to reduce loading time...

it probably would be a good idea to do some example projects to test the functionality before working on the big project...

I think the problem is displaying several images at the same time... if you display them one by one, or only a few at a time, then you should not have memory problems...

--
Nice flung project.  I'm guessing canvas's flung handler won't work when the screen is scrollable, right?

I will do some experimenting, as you suggest.  Would there be a problem displaying a bunch of images from the device (via filepath) at the same time?

--
You will run into problems if you use many images or few images but very large images. Each Android process is allowed only so much memory (on earlier phones, it was 8 MB, nowadays 128 MB is typical, but it depends on the phone and Android version). The Sprite image above, while only being 11 KB on disk, will be roughly 200 KB in memory. This is because for the image to be drawn, it must be uncompressed from the JPG format and represented as a WxH grid of 4-byte pixels (alpha, red, green, blue) that is copied to the screen. To get a sense of how big each picture is to be held in memory, multiply the width and height together and then multiply that by 4. The App Inventor framework will attempt to scale images that are at least 2x the width and height of the screen automatically to reduce the memory load, but that won't protect you indefinitely from OutOfMemoryErrors. So ultimately the number of images you will be allowed to draw at one time will be a function of how big those images are uncompressed and the memory allowed per process on the user's phone.
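To put numbers on this, the rule of thumb from the paragraph above (width × height × 4 bytes per pixel) can be checked with a few lines of plain Java; the 163×309 sprite mentioned earlier comes out at roughly 200 KB uncompressed, even though the file is only 11 KB on disk:

```java
public class BitmapMemory {

    // Uncompressed ARGB footprint: 4 bytes (alpha, red, green, blue) per pixel.
    static long uncompressedBytes(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        long bytes = uncompressedBytes(163, 309);
        // 163 * 309 * 4 = 201468 bytes, i.e. roughly 200 KB in memory
        System.out.println(bytes + " bytes = ~" + (bytes / 1024) + " KB");
    }
}
```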

--
Okay.  Wow.  Thank you so much.  Makes me wonder how all those image-filled video game apps are done.

So if I had a 35 by 35 px image, that would be 1,225... KB?  And if I had 12 of those that would be 14,700 KB.  Would that blow the average phone?

If I had 60 images 35 by 35, that would be 61,250 KB.  On a tablet would that be a whole lot?  I have no idea.

The user cropping the images to the size they'll be displayed should help, I think?

-Clueless

--
You're off on the math:

35 x 35 = 1225 pixels x 4 bytes/pixel = 4900 bytes (not KB!) = ~5 KB
If you have 60 of those: 4900 x 60 = 294000 bytes (not KB!) = ~287 KB
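The same correction, spelled out in a couple of lines of Java:

```java
public class ImageBudget {
    public static void main(String[] args) {
        int perImage = 35 * 35 * 4;   // 4900 bytes for one 35x35 ARGB image
        int total = perImage * 60;    // 294000 bytes for 60 of them
        System.out.println(perImage + " bytes each, ~" + total / 1024 + " KB total");
        // prints "4900 bytes each, ~287 KB total"
    }
}
```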

Cropping or resizing will reduce the amount of memory used because it makes the image smaller. Simply compressing by switching to JPG or PNG will not help runtime size (although it does help keep you below the 10 MB App Inventor limit).

--
That's not so bad after all (maybe).

--
(added to the new Limits section of the FAQ: FAQ for Limits)

--

### Android video encoding with fr and resolution manipulation

I want to be able to take a video recorded with an Android device and encode it to a new resolution and frame rate in my app. The purpose is to upload a much smaller version (in size) of the original video, since these will be videos 30 minutes long or more.
So far, I've read people saying FFmpeg is the way to go. However, the documentation seems to be lacking.
I have also considered using OpenCV: http://opencv.org/platforms/android.html
Considering I need to manipulate the video resolution and frame rate, which tool do you think does such things better? Are there any other technologies to consider?
An important question: since these will be long videos, is it reasonable to do the encoding on an Android device (consider power resources, time, etc.)?

--
I decided to use FFmpeg to tackle this project. After much research and many trials, I was not able to build the FFmpeg library myself (using Ubuntu 14.04 LTS).
However, I used this excellent library: https://github.com/guardianproject/android-ffmpeg-java . I just created a project, added that library, and it works like a charm. No need to build your own files or mess with the Android NDK. Of course, you would still need to build the library yourself if you want to customize it, but it has everything I need.
Here is an example of how I used it to lower a video's resolution and change the frame rate:
``````
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // input source
    final Clip clip_in = new Clip("/storage/emulated/0/Developer/test.mp4");

    Activity activity = (Activity) MainActivity.this;
    File fileTmp = activity.getCacheDir();
    File fileAppRoot = new File(activity.getApplicationInfo().dataDir);

    final Clip clip_out = new Clip("/storage/emulated/0/Developer/result2.mp4");
    // put flags in clip
    clip_out.videoFps = "30";
    clip_out.width = 480;
    clip_out.height = 320;
    clip_out.videoCodec = "libx264";
    clip_out.audioCodec = "copy";

    try {
        FfmpegController fc = new FfmpegController(fileTmp, fileAppRoot);
        fc.processVideo(clip_in, clip_out, false, new ShellUtils.ShellCallback() {

            @Override
            public void shellOut(String shellLine) {
                System.out.println("MIX> " + shellLine);
            }

            @Override
            public void processComplete(int exitValue) {
                if (exitValue != 0) {
                    System.err.println("concat non-zero exit: " + exitValue);
                    Log.d("ffmpeg", "Compilation error. FFmpeg failed");
                    Toast.makeText(MainActivity.this, "result: ffmpeg failed", Toast.LENGTH_LONG).show();
                } else {
                    if (new File("/storage/emulated/0/Developer/result2.mp4").exists()) {
                        Log.d("ffmpeg", "Success file: " + "/storage/emulated/0/Developer/result2.mp4");
                    }
                }
            }
        });
    } catch (Exception e) {
        // FileNotFoundException, IOException, InterruptedException, ...
        e.printStackTrace();
    }

    setContentView(R.layout.activity_main);
}
``````
The function `processVideo` produces a command similar to `ffmpeg -i input -s 480x320 -r 30 -vcodec libx264 -acodec copy output`
This is a very simple example, but it produced the same kind of conversion as desktop ffmpeg. This code needs lots of work! I hope it helps someone.
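For reference, here is a small sketch (my own helper, not part of the library) of how the Clip fields above map onto the flags in that command line:

```java
import java.util.Arrays;
import java.util.List;

public class FfmpegArgs {

    // Builds an argument list equivalent to the command processVideo produces.
    static List<String> buildArgs(String in, String fps, int width, int height,
                                  String videoCodec, String audioCodec, String out) {
        return Arrays.asList(
                "ffmpeg",
                "-i", in,                    // input file
                "-s", width + "x" + height,  // target resolution
                "-r", fps,                   // target frame rate
                "-vcodec", videoCodec,       // re-encode the video stream
                "-acodec", audioCodec,       // "copy" passes the audio stream through untouched
                out);
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
                buildArgs("test.mp4", "30", 480, 320, "libx264", "copy", "result2.mp4")));
        // prints "ffmpeg -i test.mp4 -s 480x320 -r 30 -vcodec libx264 -acodec copy result2.mp4"
    }
}
```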

--

### Is it possible to compress video on Android?

I want to do video compression.
Actually, in my application I want to have two options: one low and one high. If I choose the low option, the application will compress the video and then upload it. If I choose high, it will upload the original video as recorded.
I want to do this in my application and I'm confused. I have searched Google a lot but haven't been able to find a useful way to solve this. Please, can anyone help me out?

--
I used ffmpeg4android and was able to achieve this in about 10 minutes using only Java. Note that it's a commercial library.

--
Yes, on Android you can use ffmpeg4android to compress video; it's a native library.
Install the Android NDK first to use it.

--
My answer may be late, but better late than never; I want to share my experience of compressing video. I had a large 10-minute MOV video that I wished to send to a friend through email, but the file was too large for most email clients, so I downloaded a free program.
I think this could be suitable for you too; if you still need it, you can also try it. It is easy to operate, with three steps to compress videos.

--

### Resize video

Hi experts, is there a way to resize (not crop) the video captured in App Inventor 2, or to compress the video, in order to upload it to a server?

--
you could write your own extension and create a block yourself...

however that will be more advanced and will require some Java skills...

snippets:

--

### app crash on launch

--
see the following general tips and in your case especially #2

1. Use different screens wisely
Before starting to create another screen, first ask yourself: is it really necessary? See also Building apps with many screens and SteveJG's post about advantages/disadvantages, because in only one screen you can also use vertical arrangements to simulate different screens; just set the arrangements to visible = true/false as needed... See also Martyn_HK's example about how to use tabs in App Inventor and another example from Cyd

2. App Inventor works best if you use images whose size matches the size you want them to appear on your screen. If you import larger images into your app, your app may run out of system memory. Using Images with App Inventor

3. Avoid redundancy
Probably it helps to read chapters 19 - 21 in Dave's book http://www.appinventor.org/book2 to get an idea of how to do DRY (Don't Repeat Yourself) programming with App Inventor

4. See SteveJG's

to find out more about the Runtime Error, you can use Logcat
I normally use Eclipse and Logcat there, but if you have installed the App Inventor software (see also http://appinventor.mit.edu/explore/ai2/setup-emulator.html), you already have everything you need to use logcat...

How to use Logcat

2. in File Manager go to the App Inventor directory, which is  C:\Program Files\App Inventor or similar
3. press Shift and right mouse click the subdirectory commands-for-Appinventor to get the context menu
4. select "open command window here" and you will get a command window of that subdirectory
5. enter adb logcat and the logcat will start running
6. start your app to elicit the error
7. copy the log (see below)

To copy your log, right-click, choose "Select All", and press Enter to copy the complete log into the clipboard; then open Notepad and paste it using Ctrl-V.

--

### Reading a cell value in CSV File

The Backpack is supposed to be emptied when I exit App Inventor. When I restart App Inventor, my Backpack should be empty. But it's not. What can I do?

--
In the latest AI2 update, the backpack is persistent, which means it will keep your blocks until you manually empty it.

Changes between nb154 and nb154a (February 15, 2017)
⦁ Make the Backpack persistent – If you leave MIT App Inventor with blocks left in your backpack, they will be there the next time you login.

--
We changed this behavior a few weeks ago. Before, the backpack was per session and so when you refreshed it was initialized empty. Now, we store the contents of the backpack in your user account so you can use them longer term. If you would like to clear your backpack, you can do so by right-clicking (ctrl+click on Mac) on the blocks workspace and select the "Empty the backpack" menu option.

--

### What is the BLE advertising type from BLE module?

I am testing the BLE advertising function on a Samsung S7 using the BLE extension, and the advertising is sent out. On the other end, the scanner can see this advertising with device info, i.e. "Samsung S7" etc., but the advertising type is recognized as NON_CONNECTABLE_UNDIRECTED, which means it is not connectable. I tried to connect to the advertiser anyway and, as expected, it failed due to timeout. So the question is: do you know what the advertising type is set to? Or could it be due to my Samsung S7 settings?

Any suggestions are highly appreciated.

--
Most phones cannot act as BLE servers, only as clients. That may explain the behavior of your phone.

--
thanks for quick response!

--

### GPS-time in Android

I'm looking for a way to display GPS-time in my app. What is the easiest way to do this?
(It has to be GPS time due to slight differences between GPS time and the internal system time.)

--
Whenever you receive a fix from the GPS, the method `onLocationChanged` is called. The parameter to this method is an instance of `Location`. One of the methods of this parameter is `getTime`, which will give you what you are looking for. That is, if I understood correctly what you are looking for. Read more here

--
Getting the GPS time can be rather confusing! To extend the discussion in the accepted answer, getTime() in the onLocationChanged() callback gives different answers depending on how the location (not necessarily GPS) information is retrieved (based on Nexus 5 testing):
(a) If using the Google FusedLocationProviderApi (Google Location Services API), then getProvider() will return 'fused' and getTime() will return the device's time (System.currentTimeMillis())
(b) If using the Android LocationManager (Android Location API), then, depending on the phone's 'location' settings and requestLocationUpdates settings (LocationManager.NETWORK_PROVIDER and/or LocationManager.GPS_PROVIDER), getProvider() will return:
⦁ Either 'network', in which case getTime() will return the device's time (System.currentTimeMillis()).
⦁ Or 'gps', in which case getTime() will return the GPS (satellite) time.
Essentially: 'fused' uses GPS & Wi-Fi/network, 'network' uses Wi-Fi/network, 'gps' uses GPS.
Thus, to obtain GPS time, use the Android LocationManager with requestLocationUpdates set to LocationManager.GPS_PROVIDER. (Note that in this case the getTime() milliseconds part is always 000.)
Here is an example using Android LocationManager (Android Location API):
``````
public void InitialiseLocationListener(android.content.Context context) {

    android.location.LocationManager locationManager = (android.location.LocationManager)
            context.getSystemService(android.content.Context.LOCATION_SERVICE);

    android.location.LocationListener locationListener = new android.location.LocationListener() {

        public void onLocationChanged(android.location.Location location) {

            String time = new java.text.SimpleDateFormat("dd/MM/yyyy HH:mm:ss.SSS").format(location.getTime());

            if (location.getProvider().equals(android.location.LocationManager.GPS_PROVIDER))
                android.util.Log.d("Location", "Time GPS: " + time); // This is what we want!
            else
                android.util.Log.d("Location", "Time Device (" + location.getProvider() + "): " + time);
        }

        public void onStatusChanged(String provider, int status, android.os.Bundle extras) {
        }

        public void onProviderEnabled(String provider) {
        }

        public void onProviderDisabled(String provider) {
        }
    };

    if (android.support.v4.content.ContextCompat.checkSelfPermission(context,
            android.Manifest.permission.ACCESS_FINE_LOCATION) != android.content.pm.PackageManager.PERMISSION_GRANTED) {
        android.util.Log.d("Location", "Incorrect 'uses-permission', requires 'ACCESS_FINE_LOCATION'");
        return;
    }