Channel: Intel Developer Zone Articles

Optimizing Android* Game mTricks Looting Crown on the Intel® Atom™ Platform


Abstract

Games for smartphones and tablets are the most popular category on app stores. In the early days, mobile devices had significant CPU and GPU constraints that affected performance. So most games had to be simple. Now that CPU and GPU performance has increased, more high-end games are being produced. Nevertheless, a mobile processor still has less performance than a PC processor.

With the growth in the mobile market, many PC game developers are now making games for the mobile platform. However, traditional game design decisions and the graphic resources of a PC game are not a good fit for mobile processors and may not perform well. This article shows how to analyze and improve the performance of a mobile game and how to optimize graphic resources for a mobile platform, using mTricks Looting Crown as an example. The IA (Intel architecture) version of Looting Crown is now available at the following link:

https://play.google.com/store/apps/details?id=com.barunsonena.looting

Figure 1. mTricks Looting Crown

1. Introduction

mTricks has significant experience in PC game development using a variety of commercial game engines. While planning its next project, mTricks forecasted that the mobile market was ready for a complex MMORPG, given the performance growth of mobile CPUs and GPUs. So it changed the game target platform for its new project from the PC to mobile.

mTricks first ported the PC codebase to Android*. However, the performance was less than expected on the target mobile platforms, including an Intel® Atom™ processor-based platform (code named Bay Trail).

mTricks was encountering two problems that often face PC developers who transition to mobile:

  1. The low processing power of the mobile processor means that traditional PC graphic resources and designs are unsuitable.
  2. Due to capability and performance variations among mobile CPUs and GPUs, game display and performance vary on different target platforms.

2. Executive summary

Looting Crown is an SNRPG (Social Network + RPG) style game supporting full 3D graphics and various multiplayer modes (PvP, PvE, and clan vs. clan). mTricks developed and optimized it on a Bay Trail reference design; the specification is listed in Table 1.

Table 1. Bay Trail reference design specification and 3DMark score

Bay Trail reference design, 10”
CPU: Intel® Atom™ processor, quad core, 1.46 GHz
RAM: 2 GB
Resolution: 2560 x 1440
3DMark Ice Storm Unlimited score: 15,094
Graphics score: 13,928
Physics score: 21,348

mTricks used Intel® Graphics Performance Analyzers (Intel® GPA) to find CPU and GPU bottlenecks during development and used the analysis to solve issues of graphic resources and performance.

The baseline performance was 23 fps, and Figure 2 shows GPU Busy and Target App CPU Load statistics during a 2 minute run. The average of GPU Busy is about 91%, and the Target App CPU Load is about 27%.

Figure 2. Comparing CPU and GPU load of the baseline version with Intel® GPA System Analyzer

3. Where is the bottleneck between CPU and GPU?

There are two ways to determine whether the bottleneck is in the CPU or the GPU: use an override mode, or change the CPU frequency.

Intel GPA System Analyzer provides the “Disable Draw Calls” override mode to help developers find where the bottleneck is between CPU and GPU. After running this override mode, compare each result with/without the override mode and check the following guidelines:

Table 2. How to analyze games with Disable Draw Calls override mode

Performance change with “Disable Draw Calls” override mode → Bottleneck
If FPS doesn’t change much: the game is CPU bound; use the Intel® GPA Platform Analyzer or Intel® VTune™ Amplifier to determine which functions are taking the most time.
If FPS improves: the game is GPU bound; use the Intel GPA Frame Analyzer to determine which draw calls are taking the most time.

Intel GPA System Analyzer can simulate the application performance with various CPU settings, which is useful for bottleneck analysis. To determine whether your application performance is CPU bound, do the following:

  1. Verify that your application is not Vertical Sync (Vsync) bound.
    Check the Vsync status. Vsync is enabled if you see the gray highlight in the Intel GPA System Analyzer Notification pane.
    • If Vsync is disabled, proceed to step 2.
    • If Vsync is enabled, review the frame rate in the top-right corner of the Intel GPA System Analyzer window. If the frame rate is around 60 FPS, your application is Vsync bound, and there is no opportunity to increase FPS. Otherwise, proceed to step 2.
  2. Force a different CPU frequency using the sliders in the Platform Settings pane (Figure 3) of the Intel GPA System Analyzer window. If the FPS value changes when you modify the CPU frequency, the application is likely to be CPU bound.

Figure 3. Modify the CPU frequency in the Platform Settings pane

Table 3 shows the simulation results for Looting Crown. With “Disable Draw Calls” override on, the FPS remained unchanged. This would normally indicate the game was CPU bound. However, the “Highest CPU freq” override also didn’t change FPS, implying that Looting Crown was GPU bound. To resolve this, we returned to the data in Figure 2, which showed that the GPU load was about 91% and CPU load was about 27% on the Bay Trail device. The CPU could not be utilized well due to the GPU bottleneck. We proceeded with the plan to optimize the GPU usage first and then retest.

Table 3. The FPS result of the baseline version with Disable Draw Calls and Highest CPU Frequency.

Bay Trail device        FPS
Original                23
Disable Draw Calls      23
Highest CPU freq.       23

4. Identifying GPU bottlenecks

We found that the performance bottleneck was in the GPU. As a next step, we analyzed the cause of the GPU bottleneck with Intel GPA Frame analyzer. Figure 4 shows the captured frame information of the baseline version.

Figure 4. Intel® GPA Frame Analyzer view of the baseline version

4.1 Decrease the number of draw calls by merging hundreds of static meshes into one and using a bigger texture

Tables 4 and 5 show the information captured by Intel GPA Frame Analyzer.

Table 4. The captured frame information of the baseline version

Total ergs              1,726
Total primitive count   122,204
GPU duration            23 ms
Time to show frame      48 ms

Table 5. Draw call cost of the baseline version

Type                             Erg          Time     %
Clear                            0            0.2 ms   0.5 %
Ocean                            1            6 ms     13.7 %
Terrain                          2~977        20 ms    41.9 %
Grass                            19~977       18 ms    39.0 %
Character, building and effect   978~1676     19 ms    40.6 %
UI                               1677~1725    1 ms     3.4 %

The total time for “Terrain” is 20 ms, and “Grass” within “Terrain” accounts for 18 ms, about 90% of the “Terrain” processing time. So we analyzed further to see why “Grass” processing took so long.

Figures 5 and 6 show the output of the ergs for “Terrain” and “Grass”.

Figure 5. The terrain

Figure 6. Texture of “Grass”

Looting Crown drew the terrain by drawing a small grass quad repeatedly, so the number of draw calls for “Terrain” was 960. The drawing time of one small grass quad is very small; however, each draw call has overhead, which makes it an expensive operation. So we recommended decreasing the number of draw calls by merging hundreds of static meshes into one and using a bigger texture. Table 6 shows the changed result.

Table 6. Comparison of draw cost between small and big texture

Small texture: 18 ms (960 ergs)
Big texture: 6 ms (1 erg)

Figure 7. The changed terrain

Even simplified, the tile-based terrain had required a large number of draw calls; by reducing them, we saved 12 ms on drawing the “Grass”.

4.2 Optimizing graphics resources

Tables 7 and 8 show the new information captured by Intel GPA Frame analyzer after applying the big texture for grass.

Table 7. The captured frame information of the 1st optimization version

Total ergs              179
Total primitive count   27,537
GPU duration            24 ms
Time to show frame      27 ms

Table 8. Draw call cost of the 1st optimization version

Type                             Erg                 Time    %
Clear                            0                   2 ms    10.4 %
Ocean                            18                  6 ms    23.6 %
Terrain                          1~17, 19, 23~96     14 ms   54.3 %
Grass                            19                  6 ms    23.2 %
Character, building and effect   20~22, 97~131       1 ms    5.9 %
UI                               132~178             1 ms    5.7 %

We then checked whether the game was still GPU bound, repeating the measurements with the “Disable Draw Calls” and “Highest CPU Frequency” simulations.

Table 9. The FPS result of 1st optimization version with “Disable Draw Calls” and “Highest CPU Frequency”

Bay Trail device        FPS
Original                40
Disable Draw Calls      60
Highest CPU freq.       40

In Table 9, the “Disable Draw Calls” simulation increased the FPS while the “Highest CPU Frequency” simulation did not change it, so we knew Looting Crown was still GPU bound. We also checked CPU load and GPU Busy again.

Figure 8. CPU and GPU load of the 1st optimization version with Intel® GPA System Analyzer

Figure 8 shows that GPU load is about 99% and CPU load is about 13% on Bay Trail. The CPU still could not be fully utilized due to the GPU bottleneck.

Looting Crown was originally developed for PCs, so the existing graphic resources were not suitable for mobile devices, which have lower GPU and CPU processing power. We did several optimizations to the graphic resources as follows.

  1. Minimizing Draw Calls
    1. Reduced the number of materials: The number of object materials was reduced from 10 to 2.
    2. Reduced the number of particle layers.
  2. Minimizing the number of polygons
    1. Applied LOD (level of detail) for characters using the “Simplygon” tool.

      Figure 9. A character with progressively reduced LOD

    2. Minimized number of polygons used for terrain: First, we minimized the number of polygons for faraway mountains that did not require much detail. Second, we minimized the number of polygons for flat terrain that could be represented by two triangles.
  3. Using optimized light maps
    1. Removed the dynamic lights for “Time of Day”.
    2. Minimized the light map size of each mesh: Reduced the number of light maps used for the background.
  4. Minimizing the changes of render states
    1. Reduced the number of materials, which also reduced render state changes and texture changes.
  5. Decoupling the animation part in static mesh
    1. The Havok* engine didn’t support a partial update of an animated part of an object, so an object with only a small moving mesh was updated in full, including its static part. We therefore separated the animated part (the smoke, circled in red in Figure 10) from the rest of the object, dividing it into two separate object models.

Figure 10. Decoupled animation of the smoke from the static mesh

4.3 Apply Z-culling efficiently

When an object is rendered by the GPU, the three-dimensional data is projected to two-dimensional (x-y) screen coordinates, and the Z-buffer (depth buffer) stores the depth (z coordinate) of each screen pixel. If two objects of the scene must be rendered to the same pixel, the GPU compares their depths and overwrites the pixel only if the new object is closer to the observer, so the Z-buffer reproduces the usual depth perception correctly. Z-culling takes advantage of this by drawing the closest objects first: pixels of farther objects then fail the depth test and are never shaded, which improves rendering performance for hidden surfaces.

In Looting Crown, there were two kinds of terrain drawing: ocean drawing and grass drawing. Because large portions of the ocean were behind grass, many ocean areas were hidden. However, the ocean was rendered earlier than the grass, which prevented efficient Z-culling. Figures 11 and 12 show the GPU duration of drawing ocean and grass, respectively; erg 18 is the ocean and erg 19 is the grass. If the grass is rendered before the ocean, the depth test indicates that the hidden ocean pixels need not be drawn, decreasing the GPU duration of the ocean draw. Figure 13 shows the ocean drawing cost after the second optimization: the GPU duration decreased from 6 ms to 0.3 ms.

Figure 11. Ocean drawing cost of 1st optimization

Figure 12. Grass drawing cost of 1st optimization

Figure 13. Ocean draw cost of 2nd optimization

Results

By taking these steps, mTricks optimized all graphics resources for mobile devices without compromising graphics quality. The erg count decreased from 1,726 to 124, and the primitive count decreased from 122,204 to 9,525.

Figure 14. The change of graphics resource

Figure 15 and Table 10 show the outcome of all these optimizations. After optimizations, FPS changed from 23 FPS to 60 FPS on the Bay Trail device.

Figure 15. FPS Increase

Table 10. Changed FPS, GPU Busy, and App CPU Load

                    Baseline    1st Optimization    2nd Optimization
FPS                 23          45                  60
GPU Busy (%)        91          99                  71
App CPU Load (%)    27          13                  22

After the first optimization, the game was still GPU bound on Bay Trail. The second optimization reduced the GPU workload further by optimizing the graphic resources and the Z-buffer usage. Finally, the Bay Trail device hit 60 FPS; because Android uses Vsync, 60 FPS is the maximum performance on the Android platform.

Conclusion

When you start to optimize a game, first determine where the application bottleneck is; Intel GPA provides powerful analytic tools for this. If your game is CPU bound, Intel VTune Amplifier is a helpful tool. If your game is GPU bound, you can find more detail using Intel GPA. To fix GPU bottlenecks, look for efficient ways to reduce draw calls, polygon count, and render state changes. Also check the sizes of terrain textures, animation objects, and light maps, and the order of Z-buffer culling.

About the Authors

Tai Ha is an application engineer focusing on enabling online games in APAC region. He has been working for Intel since 2005 covering Intel® Architecture optimization on Healthcare, Server, Client, and Mobile platforms. Before joining Intel, Tai worked for biometric companies based in Santa Clara, USA as a security middleware architect since 1999. He received his BS in Computer Science from Hanyang University, Korea.

Jackie Lee is an Applications Engineer with Intel's Software Solutions Group, focused on performance tuning of applications on Intel® Atom™ platforms. Prior to Intel, Jackie worked in the LG Electronics CTO department. He received his MS and BS in Computer Science and Engineering from Chung-Ang University.

References

The IA version of Looting Crown is available on Google Play:

https://play.google.com/store/apps/details?id=com.barunsonena.looting

Intel® Graphics Performance Analyzers
https://software.intel.com/en-us/vcsource/tools/intel-gpa

Havok
http://www.havok.com

mTricks
https://www.facebook.com/mtricksgame

Intel, the Intel logo, and Atom are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.


Intel® System Studio - Solutions, Tips and Tricks


The alternatives for Intel® IPP legacy Generated Transforms domain


Starting with Intel® Integrated Performance Primitives (Intel® IPP) 9.0, the Intel® IPP Generated Transforms (ippGEN) domain functions are legacy. This domain was generated by the Spiral* tool. The domain will not be optimized for new architectures (the latest optimizations target Intel® Advanced Vector Extensions), and newly detected performance and stability issues will not be fixed.

Here are some alternatives to the ippGEN functionality used in your application:

  • Alternative Intel® IPP functions
  • Alternative open-source libraries

The alternative Intel® IPP functions
The ippGEN domain is a part of Intel® IPP that operates on one-dimensional signals for applications that require maximum performance. This domain provides the C programming language interfaces for several fixed-length linear transforms like Hartley transform (DHT), Walsh-Hadamard (or Hadamard) transform (WHT), discrete cosine transform (DCT-IV) and discrete Fourier transform (DFT).

The Intel® IPP signal processing (ippSP) domain provides substitutes for the DCT and DFT functionality. The ippsDFT and ippsDCT functions, which support arbitrary vector lengths, can be used effectively as alternatives to the fixed-length transform functions in the ippGEN domain. The DHT and WHT functions are not currently included in the signal processing domain.

The ippGEN domain only includes optimizations for the Intel® Advanced Vector Extensions (Intel® AVX) instruction set, while the ippSP domain functions add optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) on Intel® Xeon® processors, and Intel® AVX-512 on Intel® Xeon Phi™ coprocessors. The DFT functions in the ippGEN domain were only optimized for small vector lengths from 1 to 64. The DFT functions in the signal processing domain provide a highly optimized solution for arbitrary vector lengths from 1 up to 2^28, depending on the data type (real or complex; 32-bit float or 64-bit double).

Some open source options
Some open-source libraries, for example the FFTW* library, or the Spiral* code-generation tool, are options for the Hartley transform and the Walsh-Hadamard transform. The table below summarizes the ippGEN domain alternatives.

ippGEN function                        Alternative suggestion
Hartley transform (DHT)                Spiral, FFTW, etc.
Walsh-Hadamard transform (WHT)         Spiral, FFTW, etc.
Discrete cosine transform (DCT-IV)     ippsDCT (using the related GetSize, Init, Fwd, and Inv transform interfaces)
Discrete Fourier transform (DFT)       ippsDFT (using the related GetSize, Init, Fwd, and Inv transform interfaces)

If you have any problems moving to the new versions of Intel® IPP, feel free to contact us through Intel® Premier Support, or post your questions on the Intel® IPP forum.

 

Facebook Connect Plugin (Android)


Introduction:

This article provides step-by-step instructions for including the Facebook Connect plugin in applications built with the Intel XDK, in order to enable Facebook functionality on Android devices, such as Login, Logout, Get Status, and Show a Dialog.

Requirements:

  1. An Android device (for testing).
  2. Intel App Preview installed on your Android device.
  3. The Intel XDK installed on your workstation.

To include the Facebook plugin, follow these steps:

  1. Create a new project by clicking Start A New Project.
  2. Select HTML + Cordova and check the box Use App Designer.
  3. Click Continue; a new window will pop up with two fields, Project Directory and Project Name. Once you provide the project name, click Create.
  4. Select App Framework.
  5. To use this plugin, make sure you have registered your app with Facebook and have an APP_ID; in short, register the app to get an app ID.
  6. Now go to the list of projects and select the newly created project.
  7. Expand the Plugins section, go to the featured and custom Cordova plugins, and select Facebook Connect.
  8. A window will pop up prompting for the Facebook App ID and Facebook App Name; fill in these fields.
  9. Once you click Save, you can see that the plugin is included.

 

How do you get a Facebook App ID?

  1. Visit https://developers.facebook.com/apps and log in with your Facebook credentials. (If you don’t have an account, you can sign up for one.)
  2. Click Add a New App and select Android.
  3. In the text field, enter the name of your app and click Create New Facebook App ID.
  4. After making a few selections, such as the category of the app, you will be able to click Create App ID.
  5. To see your App ID number, simply select Skip Quick Start.

 

Test - to verify that the included plugin works, let’s try out the sample.

  1. Download the sample zip from https://github.com/Wizcorp/phonegap-facebook-plugin and unzip the entire folder. NOTE: There is no need to install the Cordova CLI, because it comes along with the Intel XDK listed in the featured plugins.
  2. Go back to the XDK, start a new project, and select Import Your HTML5 Code Base.
  3. On the right-hand side of the window, click the file icon, select the location to import the files from, and click Continue.
  4. Provide a project name, click Create, and then Continue.
  5. Go to index.html and paste your app ID into this line of code:

var appId = "Enter FB Application ID";

  6. Now build your app and run it; you will see a set of buttons such as Login with Facebook. This confirms that the plugin can perform Facebook functionality. Using the sample as a reference, you can run your own project.

NOTE: Instead of using Test or Debug, build your application. Running the build on a device will show the expected behavior, unlike testing on the emulator.

Screenshot of the Sample App

Let's build our own app with Facebook login functionality:

  1. Follow the instructions above for including the Facebook plugin, steps 1-9.
  2. Click the Develop tab and select the Design view. (I have included a header that displays the app name and a button that triggers the login action.)
  3. Switch from the design view to the code view.
  4. In the left sidebar, several files are listed; open index.html, include the function below, and call it as shown.

    <!-- A button that calls the login() function when clicked -->
    <div class="upage vertical-col panel" id="mainpage" data-header="af-header-2" data-footer="none">
        <div class="event listening button" onclick="login();">Login with facebook</div>
    </div>

    var login = function () {
        if (window.cordova.platformId == "browser") {
            var appId = "Your app Id";  // replace with your Facebook App ID
            facebookConnectPlugin.browserInit(appId);
            alert(JSON.stringify(facebookConnectPlugin));
        }
        facebookConnectPlugin.login(["email"],
            function (response) { alert(JSON.stringify(response)); },   // success callback
            function (response) { alert(JSON.stringify(response)); });  // failure callback
    };
  5. Now build your app and run it on the device.

  6. Screenshots of the app.

Good Luck & Enjoy!


Getting Started on Intel® RealSense™ Technology for Android*



The goal of this article is to help you quickly setup and start developing with Intel® RealSense™ technology on devices running Android*.

Pre-Work (Infrastructure)

There are two ways you can set up your Android development environment.

First Method (DIY, or do it yourself):

1) Download and install a compatible version of the Java* SE Development Kit (JDK). Then set the environment variables for the Java installation.

2) Download and install the Android SDK tools and AVD. Again you will need to add all the tool directories (such as <path/to/ADT-bundle>/sdk/tools, <path/to/ADT-bundle>/sdk/platform-tools, <path/to/ADT-bundle>/sdk/build-tools) to your environment search paths.

3) If you want to use an IDE for Android to simplify your development work, you will need to find an IDE that has an Android plugin and install it.

4) Then you will need to download and install the Android plugin for your IDE. After that you will need to configure the plugin so it can access your Android SDK installation.

5) Finally if you want to use the Android emulator, you will need to set up the AVD accordingly.

Going through all of the above steps for the first time is a time consuming and frustrating experience for new developers. The complex array of tools and SDKs can quickly derail even the most productive developers. However, an easier way is available (see below).

Second (Recommended Method):

The Intel® Integrated Native Developer Experience (Intel® INDE) automates all of the steps above with a few clicks. All you have to do is select your favorite IDE (this article uses Eclipse*), and the Intel INDE installation will take care of the rest by installing the IDE and all of the tools and SDKs above. The installation provides you with a pre-configured Android development environment that is ready for your first Hello World app. This document guides you through installing Intel INDE with Eclipse IDE integration.

You will need a host computer that is a 64-bit Intel® architecture-based Windows*/Apple OS X* system with 4 GB of available memory, and either 4.5 GB of disk space for the Starter and Professional Editions or 5.5 GB for the Ultimate Edition.

1) You can visit the Intel INDE "try & buy" page, which lists three Intel INDE editions (Starter, Professional, and Ultimate). The Starter edition is the free and most basic version, but includes everything you need to start developing Intel RealSense apps on Android. Professional and Ultimate editions contain additional Intel® software, and a free trial version of the Ultimate edition is also available.

After you have selected an edition, download and run the installer (Figure 1).

Figure 1
Figure 1.

2) Then you will be prompted with the screen shown in Figure 2.

After loading Intel INDE press "Next"

Figure 2
Figure 2.

3) On the next screen (Figure 3) you will be asked to choose the IDE; for this article, choose Eclipse.

Figure 3
Figure 3.

4) The screen shown in Figure 4 allows you to choose the components to be installed with Eclipse. Unless you already know the specific settings you need, it is advisable to proceed with the default settings in each step of the installation. Then choose the memory reservation level and press "Next."

Figure 4
Figure 4.

5) Finally, press "Finish".

After the Intel INDE installation is complete, you will have a desktop shortcut called 'Eclipse for Android ADT' to the Eclipse installed by Intel INDE. Your Android development environment is now completely set up, and you are ready to develop Android apps using Eclipse and test them with the emulator or on Android phones and tablets.

That's it!

To develop with Intel® RealSense™ technology, you must install the add-on on your host system, as instructed in the next section. Here, the "Professional" edition of Intel INDE was used.

Installing the Android Add-on for Intel® RealSense™ Technology

Now we're ready to install the add-on for Intel RealSense technology.

1) You can find the add-on here.

2) Then unpack it to a directory on your system, such as //path_to_rs/.

3) Run Eclipse and start the Android SDK Manager.
Go to Tools -> Manage Add-on Sites… (Figure 5).

Figure 5
Figure 5.

4) In the pop-up window, select User Defined Sites and click the New… button. A small pop-up window named Add Add-on Site URL will appear (Figure 6).

Figure 6
Figure 6.

5) Provide a path to the add-on.xml:
file:///path_to_rs/add-on.xml

Click OK, then Close.

6) Click Deselect All at the bottom-middle of the SDK Manager window to deselect all pre-selected packages. Then locate and expand the corresponding Android package in the SDK Manager and select the Intel RealSense… entry. Click the Install xxx Package… button located at the bottom-right of the window.

If you agree to the terms of license, click Accept. In the next window click Install.

That's it.

You can verify the Intel RealSense technology add-on installation by typing 'android list target' in the system console (use the Command Prompt in Windows or the Terminal in Apple OS X). This command lists all available Android targets, and among them you should see Intel RealSense SDK:XXX.

Once the add-on is installed, a copy of it will be created in the <Android SDK directory>/add-ons/addon-intel_realsense_sdk-intel-XXX. In a default Intel INDE installation with Eclipse IDE, <Android SDK directory> is located at C:\Intel\INDE\IDEintegration\ADT\sdk\add-ons\addon-intel_realsense_sdk-intel-XXX. In that directory, you will find the following sub-directories:

  • docs – Documentation for the add-on
  • extras – Templates to use for camera access
  • libs – Explained in the next paragraph
  • samples – Sample Android projects illustrating the usage of the Intel RealSense technology for Android add-on

The libs sub-directory contains the libraries that are needed to utilize the Camera V2 API (framework.jar) and Intel® RealSense™ 3D Camera API (com.intel.camera2.extensions.depthcamera.jar). When you develop apps using the Intel RealSense technology for Android add-on (which is based on these two APIs) you should include these libraries in your projects. These two libraries are already included in sample apps that come with the add-on.

Software Updates

When future releases of the add-on become available, all you have to do is delete all the contents in <some directory> and place the new addon.xml and intel_realsense_sdk.zip there. Then open the Android SDK Manager; you will notice the status of the Intel® RealSense™ SDK has changed to "Update available: rev. xx". Simply select it and install as you did originally.

About the Author

Stanislav Pavlov works in the Software & Services Group at Intel Corporation. In his 10+ years of experience in technology, he has focused on performance optimization, power consumption, and parallel programming. In his current role as a Senior Application Engineer providing technical support for Intel® processor-based devices, Stanislav works closely with software developers and SoC architects to help them achieve the best possible performance on Intel® platforms. Stanislav holds a Master's degree in Mathematical Economics from the National Research University Higher School of Economics. He is currently pursuing an MBA in the Moscow Business School.

9-Patch Images for Android* Splash Screen


The source code for this sample can be found here: https://github.com/gomobile/sample-ninepatch-splashscreen, or download the Intel® XDK to check out all the HTML5 Samples.

This sample demonstrates how to use 9-patch PNG images for Android* splash screens within the Intel XDK. Developers must keep in mind that their app should cater to different screen sizes and orientations. Android* apps can solve this problem by using 9-patch PNG images, which can have stretchable areas defined so that the image can be stretched without compromising the end result.

What are 9-patch images?

A nine-patch graphic is a stretchable bitmap image. Android* automatically resizes your 9-patch image to accommodate changes in resolution and specific layout constraints. When you create a 9-patch image, you configure how the image should be stretched if it needs to be resized. It must be saved with the extension ‘.9.png’.

When should they be used?

9-patch images can be used for button backgrounds, page backgrounds, splash screens, etc. For example, buttons must stretch to accommodate strings of various lengths. Splash screens often contain images or text that look contorted and pixelated when the resolution changes.


9-patch Tutorial using Draw 9-patch tool for Intel XDK splash screen

 

Create four PNG images of your splash screen:

Due to varied screen sizes and resolutions of Android* devices in the market, Android* has separated all of its screen sizes into 4 distinct screen densities:

  • Low Density (ldpi ~ 120dpi)
  • Medium Density (mdpi ~ 160dpi)
  • High Density (hdpi ~ 240dpi)
  • Extra-High Density (xhdpi ~ 320dpi) 

Note: These dpi values are approximations, since custom-built devices will have varying dpi values.
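As an illustrative sketch (not part of the Android* SDK — the function names below are made up for this example), the density bucket selection and the standard dp-to-pixel conversion (px = dp × dpi / 160, with mdpi at 160 dpi as the baseline) can be expressed as:

```cpp
#include <cassert>
#include <string>

// Map a reported screen density (dpi) to the nearest standard
// Android* density bucket; the baseline (mdpi) density is 160 dpi.
std::string densityBucket(int dpi) {
    if (dpi <= 120) return "ldpi";
    if (dpi <= 160) return "mdpi";
    if (dpi <= 240) return "hdpi";
    return "xhdpi";
}

// Convert density-independent pixels (dp) to physical pixels:
// px = dp * (dpi / 160).
int dpToPx(int dp, int dpi) {
    return dp * dpi / 160;
}
```

This is why one bitmap cannot serve every device: a 100 dp element is 100 physical pixels on an mdpi screen but 200 on an xhdpi screen, so Android* picks the resource from the matching density bucket and then stretches it as needed.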

You will have to create four PNG images of your desired splash screen, one for each of the following screen densities:

Density            Width    Height
ldpi (Small)       320      426
mdpi (Medium)      320      470
hdpi (Large)       480      640
xhdpi (XLarge)     720      960

Note: The splash screen resolutions specified above, and on the Icons and Splash Screens section of the Projects tab, are minimum recommended dimensions, not absolute required dimensions. 

Differing screen sizes, widescreen displays, and orientation are all factors that cause the image to stretch to fit the screen. So your only options are to either create an image for each screen size/density combination, or create four 9-patch images. Once these four images have been converted to 9-patch, Android* will select the appropriate file for the device’s image density and then stretch the image according to the device standard. If you choose to design a splash screen for every single resolution, you can start by following the resolutions in the table at the end of this page.

Note: If you choose to create just one 9-patch image of size 720x960, note that 9-patch images only stretch; they do not shrink. While it may look great on a tablet, when it is displayed on a standard 320x470 mobile device, text and other elements will look very small and lose legibility. To ensure that no shrinking occurs, design for the lowest common resolution in each density category.

In the sample, you will find two sets of images (NinePatchSplashScreen/www/images) – four original images ending with .png and their 9-patch counterparts ending with .9.png.

 

Convert each PNG splash screen image to 9-patch:

We are going to be using the Draw 9-patch tool from Android* to do this.

Note: Any PNG image editor that can mark pixels in a transparent color can be used.

Step 1: Launch the tool. Draw9patch.bat is part of the Android* SDK and can be launched from the sdk/tools folder.

Step 2: Import your PNG image into the tool by either dragging it into the window or locating the file using Ctrl+O.

           

Step 3: Analyze your image and decide which areas you want to preserve (make non-scalable) and which areas you would like to stretch (will distort).

Here is what the original image looks like:

    

Here is how it looks when the device stretches it vertically and horizontally (this can be viewed in the preview area, the right pane of the tool):

                   

We want the HTML5 logo to be centered in the middle, the ‘android’ text above the logo and the ‘Nine Patch Image Test Loading…’ text at the bottom of the image. This leaves the areas in grey open for stretching:

Step 4: Specify areas for stretching in Draw 9-patch tool.

Check the ‘Show lock’ checkbox (found in the bottom pane of the tool). When you mouse over the image, it will show you the non-drawable area. You will find a small one-pixel perimeter around the image where you can draw a line to specify the areas you want to stretch.

          

Check the ‘Show patches’ checkbox, which shows the stretchable patches in pink on the image. Click within the one-pixel perimeter to draw lines that define the stretchable patches. Draw lines only on the top and left borders. The pink patch areas will be stretched if the splash screen needs to be resized.

   

                                             

On the preview pane, you can see how your 9-patch image now looks when stretched vertically and horizontally:

          

Much better.

Step 5: Save the 9-patch image (File > Save 9-patch), which saves it with the .9.png extension. Android* identifies your image as a nine-patch graphic by this extension, so do not change it.

Step 6: Create three more 9-patch images for your other image sizes.

  • small (ldpi): 320x426
  • medium (mdpi): 320x470
  • large (hdpi): 480x640
  • xlarge (xhdpi): 720x960

Step 7: Add your 9-patch images to the project directory that the Intel XDK reads from. Ensure that the Cordova* splash screen plug-in has been selected in the Projects tab.

Go to the Projects tab of your app > Cordova* Hybrid Mobile App Settings > Launch Icons and Splash screens > Add your splash screens by using the little folder icon to locate them.

In init-app.js, the auto-generated code will call the hideSplashScreen() method.

app.hideSplashScreen();    // after init is a good time to remove the splash screen

The splash screen plug-in removes the splash screen after the default timeout or the hideSplashScreen() function is called, whichever comes first. You can increase the default timeout via the intelxdk.config.additions.xml file:

<!-- "value" is the minimum time, in milliseconds, to show the launch or splash screen -->
<preference name="SplashScreenDelay" value="2000" />

In the sample, the call to the hideSplashScreen method has been commented out to display the splash screen for a longer period of time.

The ‘Show Splash Screen’ button will display the splash screen for 5 seconds.

You can modify the duration by changing the timeout in milliseconds in index_user_scripts.js:

if (navigator.splashscreen) {
    navigator.splashscreen.show();
    setTimeout(function () {
        navigator.splashscreen.hide();
    }, 5000);
}

Run the app on a variety of devices with different screen sizes and aspect ratios to see the difference nine-patch images make to the splash screen.

Note: You can run this app in the Emulator or using the Debug and Test tabs, but you will not see the custom splash screens. You must build and install the app on an Android* device to see the custom splash screen.



Using Intel® IPP threaded static libraries


Q: How to get Intel® IPP Static threaded libraries?

Answer: While installing an Intel software suite product (Intel® Parallel Studio, Intel® System Studio, or Intel® INDE), select the custom installation to get the option to select the threaded libraries.

To select the right package of threaded libraries, right-click it and enable the ‘Install’ option.

After you select the threaded libraries, the selection is highlighted with a check mark and the disk space required for the threaded libraries is shown.

Q: Where can I find the threaded static libraries in my installation?

Answer: After installing the threaded libraries as described in the steps above, the internally threaded files are in the following directory:

<ipp directory>/lib/<arch>/threaded

Windows* OS: mt suffix in a library name (ipp<domain>mt.lib)

Linux* OS and OS X*: no suffix in a library name (libipp<domain>.a)

Q: How do I set the path to the single-threaded or multi-threaded libraries in a system variable or in a project?

Answer:

Windows* OS:

Single-threaded: SET LIB=<ipp directory>/lib/<arch>

Multi-threaded: SET LIB=<ipp directory>/lib/<arch>/threaded           

Linux* OS/OS X*

Single-threaded: gcc <options> -L <ipp directory>/lib/<arch>

Multi-threaded: gcc <options> -L <ipp directory>/lib/<arch>/threaded

Q: Is it recommended to use threaded static libraries?

Answer: It is strongly recommended to use the single-threaded version of the libraries for new development. The internally threaded (multi-threaded) versions of the Intel® IPP libraries are deprecated but remain available for legacy applications.

Q: How can I control threading behavior in the threaded static libraries?

Answer: Intel IPP implements multi-threading optimization with OpenMP* directives. Users can use either OpenMP* environment variables (e.g., OMP_NUM_THREADS) or the Intel IPP threading APIs to control threading behavior. Please refer to the Intel IPP Threading/OpenMP* FAQ page for further information.

 

Please let us know if you have any feedback on deprecations via the feedback URL

Threading Intel® Integrated Performance Primitives Image Resize with Intel® Threading Building Blocks


Threading Intel® IPP Image Resize with Intel® TBB.pdf (157.18 KB): Download Now

 

Introduction

The Intel® Integrated Performance Primitives (Intel® IPP) library provides a wide variety of vectorized signal and image processing functions. Intel® Threading Building Blocks (Intel® TBB) adds simple but powerful abstractions for expressing parallelism in C++ programs. This article presents a starting point for using these tools together to combine the benefits of vectorization and threading to resize images.   

From Intel® IPP 8.2 onwards, the multi-threaded (internally threaded) libraries are deprecated due to issues with performance and interoperability with other threading models, but they remain available for legacy applications. However, multithreaded programming is now mainstream, and there is a rich ecosystem of threading tools such as Intel® TBB. In most cases, handling threading at the application level (that is, externally, above the primitives) offers many advantages. Many applications already have their own threading model, and application-level/external threading gives developers the greatest flexibility and control. With a little extra effort to add threading to applications it is possible to meet or exceed internal threading performance, and this opens the door to more advanced optimization techniques such as reusing local cache data for multiple operations. This is the main reason internal threading is being deprecated in the latest releases.

Getting started with parallel_for

Intel® TBB’s parallel_for offers an easy way to get started with parallelism, and it is one of the most commonly used parts of Intel® TBB. It applies to any for() loop in an application where each iteration can be done independently and the order of execution doesn’t matter. In these scenarios, Intel® TBB parallel_for is useful and takes care of most details, like setting up a thread pool and a scheduler. You supply the partitioning scheme and the code to run on separate threads or cores. More sophisticated approaches are possible; however, the goal of this article and sample code is to provide a simple starting point and not the best possible threading configuration for every situation.

Intel® TBB’s parallel_for takes 2 or 3 arguments. 

parallel_for ( range, body, optional partitioner ) 

The range, for this simplified line-based partitioning, is specified by:

blocked_range<int>(begin, end, grainsize)

This provides information to each thread about which lines of the image it is processing. It will automatically partition the range from begin to end into grainsize chunks. Intel® TBB automatically adjusts the grainsize when ranges don’t partition evenly, so it is easy to accommodate arbitrary sizes.

The body is the section of code to be parallelized. This can be implemented separately (including as part of a class); though for simple cases it is often convenient to use a lambda expression. With the lambda approach the entire function body is part of the parallel_for call. Variables to pass to this anonymous function are listed in brackets [alg, pSrc, pDst, stridesrc_8u, …] and range information is passed via blocked_range<int>& range.

This is a general threading abstraction which can be applied to a wide variety of problems.  There are many examples elsewhere showing parallel_for with simple loops such as array operations.  Tailoring for resize follows the same pattern.
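To make the partitioning concrete, here is a stand-alone sketch (an illustration only, not TBB itself — TBB’s partitioner also balances chunks dynamically and may merge or split them) of how a line range divides into grainsize chunks:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Compute the [begin, end) row chunks that a grainsize-based
// partition of an image produces; in a parallel_for, each chunk
// would become one thread's work unit (range.begin()..range.end()).
std::vector<std::pair<int, int>> partitionRows(int begin, int end,
                                               int grainsize) {
    std::vector<std::pair<int, int>> chunks;
    for (int row = begin; row < end; row += grainsize) {
        // Clamp the last chunk so uneven ranges are still covered.
        chunks.push_back({row, std::min(row + grainsize, end)});
    }
    return chunks;
}
```

For a 1080-row image with a grainsize of 270, this yields four equal chunks, each of which TBB would hand to a worker thread.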

External Parallelization for Intel® IPP Resize

A threaded resize can be split into tiles of any shape. However, it is convenient to use groups of rows where the tiles are the width of the image.

Each thread can query range.begin(), range.size(), etc. to determine offsets into the image buffer. Note: this starting point implementation assumes that the entire image is available within a single buffer in memory. 

The image resize functions introduced in Intel® IPP 7.1 and later versions take a new approach with many advantages:

  • IppiResizeSpec holds precalculated coefficients based on the input/output resolution combination. Multiple resizes can be completed without recomputing them.
  • Separate functions for each interpolation method.
  • Significantly smaller executable size footprint with static linking.
  • Improved support for threading and tiled image processing.
  • For more information, please refer to the article: Resize Changes in Intel® IPP 7.1

Before starting resize, the offsets (number of bytes to add to the source and destination pointers to calculate where each thread’s region starts) must be calculated. Intel® IPP provides a convenient function for this purpose:

ippiResizeGetSrcOffset

This function calculates the corresponding offset/location in the source image for a location in the destination image. In this case, the destination offset is the beginning of the thread’s blocked range.

After this function it is easy to calculate the source and destination addresses for each thread’s current work unit:

pSrcT=pSrc+(srcOffset.y*stridesrc_8u);
pDstT=pDst+(dstOffset.y*stridedst_8u);

These are plugged into the resize function, like this:

ippiResizeLanczos_8u_C1R(pSrcT, stridesrc_8u, pDstT, stridedst_8u, dstOffset, dstSizeT, ippBorderRepl, 0, pSpec, localBuffer);

This specifies how each thread works on a subset of lines of the image. Instead of using the beginning of the source and destination buffers, pSrcT and pDstT provide the starting points of the regions each thread is working with. The height of each thread's region is passed to resize via dstSizeT. Of course, in the special case of 1 thread these values are the same as for a nonthreaded implementation.
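As a simplified illustration of this offset bookkeeping (the helper names below are hypothetical, and the row mapping is only the proportional part — the real ippiResizeGetSrcOffset also accounts for the interpolation filter’s support):

```cpp
#include <cassert>
#include <cstddef>

// Proportional mapping of a destination row to its source row for a
// vertical resize of srcHeight rows down/up to dstHeight rows.
int srcRowForDstRow(int dstY, int srcHeight, int dstHeight) {
    return dstY * srcHeight / dstHeight;
}

// Byte offset of a thread's starting pointer within an image buffer,
// mirroring pSrcT = pSrc + (srcOffset.y * stride) from the article.
std::size_t regionByteOffset(int rowOffset, std::size_t strideBytes) {
    return static_cast<std::size_t>(rowOffset) * strideBytes;
}
```

For a 2x vertical downscale (200 source rows to 100 destination rows), a thread whose range begins at destination row 50 starts reading at source row 100, and its pointer advance is that row number times the stride in bytes.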

Another difference to call out is that since each thread is doing its own resize simultaneously the same working buffer cannot be used for all threads. For simplicity the working buffer is allocated within the lambda function with scalable_aligned_malloc, though further efficiency could be gained by pre-allocating a buffer for each thread.

The following code snippet demonstrates how to set up resize within a parallel_for lambda function, and how the concepts described above could be implemented together.  

 Click here for full source code.

By downloading this sample code, you accept the End User License Agreement.

parallel_for( blocked_range<int>( 0, pnminfo_dst.imgsize.height, grainsize ),
            [pSrc, pDst, stridesrc_8u, stridedst_8u, pnminfo_src,
            pnminfo_dst, bufSize, pSpec]( const blocked_range<int>& range )
        {
            Ipp8u *pSrcT,*pDstT;
            IppiPoint srcOffset = {0, 0};
            IppiPoint dstOffset = {0, 0};

            // resized region is the full width of the image,
            // The height is set by TBB via range.size()
            IppiSize  dstSizeT = {pnminfo_dst.imgsize.width,(int)range.size()};

            // set up working buffer for this thread's resize
            Ipp32s localBufSize=0;
            ippiResizeGetBufferSize_8u( pSpec, dstSizeT,
                pnminfo_dst.nChannels, &localBufSize );

            Ipp8u *localBuffer =
                (Ipp8u*)scalable_aligned_malloc( localBufSize*sizeof(Ipp8u), 32);

            // given the destination offset, calculate the offset in the source image
            dstOffset.y=range.begin();
            ippiResizeGetSrcOffset_8u(pSpec,dstOffset,&srcOffset);

            // pointers to the starting points within the buffers that this thread
            // will read from/write to
            pSrcT=pSrc+(srcOffset.y*stridesrc_8u);
            pDstT=pDst+(dstOffset.y*stridedst_8u);


            // do the resize for greyscale or color
            switch (pnminfo_dst.nChannels)
            {
            case 1: ippiResizeLanczos_8u_C1R(pSrcT,stridesrc_8u,pDstT,stridedst_8u,
                        dstOffset,dstSizeT,ippBorderRepl, 0, pSpec,localBuffer); break;
            case 3: ippiResizeLanczos_8u_C3R(pSrcT,stridesrc_8u,pDstT,stridedst_8u,
                        dstOffset,dstSizeT,ippBorderRepl, 0, pSpec,localBuffer); break;
            default:break; //only 1 and 3 channel images
            }

            scalable_aligned_free((void*) localBuffer);
        });
 

As you can see, a threaded implementation can be quite similar to a single-threaded one. The main difference is simply that the image is partitioned by Intel® TBB to work across several threads, and each thread is responsible for groups of image lines. This is a relatively straightforward way to divide the task of resizing an image across multiple cores or threads.

Conclusion

Intel® IPP provides a suite of SIMD-optimized functions. Intel® TBB provides a simple but powerful way to handle threading in Intel® IPP applications. Using them together allows access to great vectorized performance on each core as well as efficient partitioning to multiple cores. The deeper level of control available with external threading enables more efficient processing and better performance. 

Example code: As with other Intel® IPP sample code, by downloading you accept the End User License Agreement.


Intel® IPP - Threading / OpenMP* FAQ


In Intel® IPP 8.2 and later versions, the multi-threaded (internally threaded) libraries are deprecated due to issues with performance and interoperability with other threading models, but they remain available for legacy applications. Multi-threaded static and dynamic libraries are available as a separate download to support legacy applications. For new application development, it is highly recommended to use the single-threaded versions with application-level threading (as shown in the picture below).

Intel® IPP 8.2 and later installations place the single-threaded libraries in the following directory structure:

<ipp directory>/lib/ia32 – single-threaded static and dynamic libraries for the IA-32 architecture

<ipp directory>/lib/intel64 – single-threaded static and dynamic libraries for the Intel® 64 architecture

Static linking (both single-threaded and multi-threaded libraries)

  • Windows* OS: mt suffix in a library name (ipp<domain>mt.lib)
  • Linux* OS and OS X*: no suffix in a library name (libipp<domain>.a)

Dynamic Linking: Default (no suffix)

  • Windows* OS: ipp<domain>.lib
  • Linux* OS: libipp<domain>.so
  • OS X*: libipp<domain>.dylib

Q: Does Intel® IPP support external multi-threading? Is it thread-safe?

Answer: Yes, Intel® IPP supports external threading, as shown in the picture below. Users can choose among different threading models such as Intel TBB, Intel Cilk Plus, Windows* threads, OpenMP, or POSIX threads. All Intel® Integrated Performance Primitives functions are thread-safe.

Q: How to get Intel® IPP threaded libraries?

Answer: While installing Intel IPP, choose the ‘custom’ installation option. You will then get the option to select the threaded libraries for different architectures.

To select the right package of threaded libraries, right-click it and enable the ‘Install’ option.

After you select the threaded libraries, the selection is highlighted with a check mark and the disk space required for the threaded libraries is shown.

Threading in Intel® IPP 8.1 and earlier versions

Threading, within the deprecated multi-threaded add-on packages of the Intel® IPP library, is accomplished by use of the Intel® OpenMP* library. Intel® IPP 8.0 continues the process of deprecating threading inside Intel IPP functions that was started in version 7.1. Though not installed by default, the threaded libraries can be installed so code written with these libraries will still work as before. However, moving to external threading is recommended.

Q: How can I determine the number of threads the Intel IPP creates?
Answer: You can use the function ippGetNumThreads to find the number of threads created by the Intel IPP.

Q: How do I control the number of threads the Intel IPP creates?
Answer: Call the function ippSetNumThreads to set the number of threads created.

Q: Is it possible to prevent Intel IPP from creating threads?
Answer: Yes. If you are calling the Intel IPP functions from multiple threads, it is recommended to have Intel IPP threading turned off. There are three ways to disable multi-threading:

  • Link to the non-threaded static libraries
  • Build and link to a custom DLL using the non-threaded static libraries
  • Call ippSetNumThreads(1)

Q: When my application calls Intel IPP functions from a separate thread, the application hangs; how do I resolve this?

Answer: This issue occurs because the threading technology used in your application and the OpenMP threading used in Intel IPP are incompatible. The ippSetNumThreads function has been developed so that threading can be disabled in the dynamic libraries. Please also check the sections above for other ways to prevent Intel IPP functions from creating threads.

Q: Which Intel IPP functions contain OpenMP* code?

Answer: The "ThreadedFunctionsList.txt" file in the ‘doc’ folder of the product installation directory provides a detailed list of the threaded functions in the Intel IPP library. The list is updated in each release.

 

Please let us know if you have any feedback on deprecations via the feedback URL

 

How to enable SoCWatch on Nexus (FUGU) player


SoCWatch Introduction

Intel® SoC Watch is a command line tool for monitoring system behaviors related to power consumption on Intel® architecture-based platforms. It monitors power states, frequencies, bus activity, wakeups, and various other metrics that provide insight into the system’s energy efficiency.

After data collection, a summary file and raw data are produced by default on the target system. The raw data (.sw1) can be imported into the Intel Energy Profiler, which shares its GUI with VTune™ Amplifier, to correlate and visualize system behavior over time. The summary file (.csv) can be opened in Excel to turn the various metrics into graphs for easier analysis.

Grant root permission from Nexus player

The Nexus Player has already been released to market. The analysis tool needs root permission to collect performance data through its kernel driver. You can follow the steps in this video tutorial: Nexus Player – How to Root Android TV. Once you have root permission, the device recognizes the su command in an adb shell.

Rebuild kernel and kernel configuration for Nexus player

Google disabled the module-loading function in the kernel configuration. For this reason, we need to download the kernel source from Google’s official website and rebuild the kernel based on our customized configuration after modifying the kernel configuration.

Step 1. Download the kernel source from Google’s official website.

Step 2. The kernel must be configured with the following options enabled:

export ARCH=x86

make fugu_defconfig

make menuconfig
  • CONFIG_MODULES=y
  • CONFIG_MODULE_UNLOAD=y
  • CONFIG_TRACEPOINTS=y
  • CONFIG_FRAME_POINTER=y
  • CONFIG_COMPAT=y
  • CONFIG_TIMER_STATS=y
  • CONFIG_X86_ACPI_CPUFREQ=m (or CONFIG_X86_ACPI_CPUFREQ=y)
  • CONFIG_INTEL_IDLE=y

Step 3. After building the kernel, the kernel image can be found at <INSTALLATION_DIR_PATH>/x86_64/arch/x86/boot/bzImage

make -j4

Step 4. Build a boot image with the pre-built kernel

On Intel platform devices, flashing only the kernel partition sometimes fails. Therefore, we build a boot image with our pre-built kernel inside. For this purpose, we can put the pre-built kernel inside the Android source tree and build only the boot image.

As a quick solution, we use the unpack/repack boot image scripts from Android Image Kitchen. First, download the factory image from Google Developers. Use unpackimg.bat to extract boot.img, then replace <INSTALLATION_DIR_PATH>\split_img\boot.img-zImage with the bzImage you built earlier. Finally, use repackimg.bat to repack the new boot.img.

NOTE: If your device can no longer boot due to a bad flash, unplug and reconnect the power and long-press the hardware key to make it enter fastboot mode. Then use the flash-all.bat script, included in the factory image you downloaded from Google Developers, to flash all images and recover the device.

Step5. Flash the new boot.img to the device.

adb reboot bootloader

fastboot flash boot boot.img

fastboot reboot

Now, you can check the kernel version to see whether the flash succeeded. If so, we can start to build the SoCWatch driver based on this kernel source.

Build the SoCWatch driver

The driver source is included in the SoCWatch package, which you can download with Intel® System Studio; SoC Watch is one of its components.

Step 1. Build socperf1_2.ko via the build-driver script in <INSTALLATION_DIR_PATH>\soc_perf_driver\src\

sh ./build-driver

Step 2. Build SOCWATCH1_5.ko via the build-driver script in <INSTALLATION_DIR_PATH>\socwatch_driver\lxkernel\

sh ./build-driver -k <KERNEL_BUILD_DIR> -s <KERNEL_BUILD_DIR>

Setup SoCWatch environment

You can execute the installation file (socwatch_android_install.bat) after granting root permission via the adb root command. However, adb cannot easily be made to run as root by default, so here we set up the SoCWatch environment step by step.

The first step is to navigate to the SoCWatch directory and copy the necessary files onto the device. On this device, we can only push the files to the sdcard location and then copy them to /data/socwatch:

tools\dos2unix.exe setup_socwatch_env.sh

tools\dos2unix.exe SOCWatchConfig.txt

adb push socwatch /sdcard/socwatch/

adb push setup_socwatch_env.sh /sdcard/socwatch/

adb push libs /sdcard/socwatch/libs/

adb push valleyview_soc /sdcard/socwatch/valleyview_soc/

adb push tangier_soc /sdcard/socwatch/tangier_soc/

adb push anniedale_soc /sdcard/socwatch/anniedale_soc/

adb push socperf1_2.ko /sdcard/socwatch/

adb push SOCWATCH1_5.ko /sdcard/socwatch/

adb shell

su

cp –r /sdcard/socwatch /data/

cd /data/socwatch

chmod 766 socwatch

Finally, refer to the User Guide (see the attachment) for instructions on collecting data. Once data collection is complete, you can pull the result file to the host system and analyze the target system with the collected performance data.

The alternatives for Intel® IPP legacy Generated Transforms domain


Starting with Intel® Integrated Performance Primitives (Intel® IPP) 9.0, the Intel® IPP Generated Transforms (ippGEN) domain functions are legacy. This domain was generated by the Spiral tool*. The domain won't be optimized for new architectures (the latest optimizations target Intel® Advanced Vector Extensions), and any newly detected performance and stability issues won't be fixed.

Here are some alternatives to substitute ippGEN functionality used in your application:

  • Alternative Intel® IPP functions
  • Alternative open-source libraries

The alternative Intel® IPP functions
The ippGEN domain is a part of Intel® IPP that operates on one-dimensional signals for applications that require maximum performance. This domain provides the C programming language interfaces for several fixed-length linear transforms like Hartley transform (DHT), Walsh-Hadamard (or Hadamard) transform (WHT), discrete cosine transform (DCT-IV) and discrete Fourier transform (DFT).

The Intel® IPP signal processing (ippSP) domain provides substitutions for the DCT and DFT functionality. The ippsDFT and ippsDCT functions, which support arbitrary vector lengths, can be effectively used as alternatives for the fixed-length transform functions in the ippGEN domain. The DHT and WHT functions are not currently included in the signal processing domain.

The Intel® ippGEN domain only includes optimizations for the Intel® Advanced Vector Extensions (Intel® AVX) instruction set, while the ippSP domain functions include further optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) on Intel® Xeon® processors, as well as Intel® AVX-512 on Intel® Xeon Phi™ coprocessors. The DFT functions in the ippGEN domain were only optimized for small vector lengths from 1 to 64. The DFT functions in the signal processing domain provide a highly optimized solution for arbitrary vector lengths from 1 up to 2^28, depending on the data type (real or complex, 32-bit float or 64-bit double).
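For reference, the transform that the ippsDFT functions compute for an arbitrary vector length can be illustrated with a naive O(n²) DFT (a sketch for validation only — it has none of IPP’s SIMD optimizations and is not IPP code):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Naive reference DFT: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N).
// Works for any length N, unlike the fixed-length ippGEN transforms.
std::vector<std::complex<double>> naiveDFT(
        const std::vector<std::complex<double>>& x) {
    const double pi = std::acos(-1.0);
    std::vector<std::complex<double>> X(x.size());
    for (std::size_t k = 0; k < x.size(); ++k) {
        for (std::size_t n = 0; n < x.size(); ++n) {
            double angle = -2.0 * pi * static_cast<double>(k * n)
                           / static_cast<double>(x.size());
            X[k] += x[n] * std::complex<double>(std::cos(angle),
                                                std::sin(angle));
        }
    }
    return X;
}
```

A constant input of length 4 transforms to a single spike of magnitude 4 in bin 0, which is a quick sanity check when comparing a replacement library against legacy ippGEN results.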

Some open source options
Some open-source libraries, for example the FFTW* library, or code generated by the Spiral tool, are options for the Hartley and Walsh-Hadamard transforms. The table below summarizes the ippGEN domain alternatives.

ippGEN Function                       Alternative Suggestion
Hartley transform (DHT)               Spiral, FFTW, etc.
Walsh-Hadamard transform (WHT)        Spiral, FFTW, etc.
Discrete cosine transform (DCT-IV)    ippsDCT (using the related GetSize, Init, Fwd, and Inv transform interfaces)
Discrete Fourier transform (DFT)      ippsDFT (using the related GetSize, Init, Fwd, and Inv transform interfaces)
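For the transforms with no direct ippSP replacement, a hand-written kernel can also be quite small. As a sketch (naive, unoptimized, and only for power-of-two lengths), an in-place fast Walsh-Hadamard transform looks like:

```cpp
#include <cstddef>
#include <vector>

// In-place fast Walsh-Hadamard transform (natural order) for a
// power-of-two length vector: butterfly passes with doubling
// stride, O(n log n) additions/subtractions in total.
void walshHadamard(std::vector<double>& v) {
    for (std::size_t h = 1; h < v.size(); h *= 2) {
        for (std::size_t i = 0; i < v.size(); i += 2 * h) {
            for (std::size_t j = i; j < i + h; ++j) {
                double a = v[j];
                double b = v[j + h];
                v[j]     = a + b;  // sum butterfly
                v[j + h] = a - b;  // difference butterfly
            }
        }
    }
}
```

Applying it twice and dividing by the length recovers the input, which makes it easy to validate such a replacement against legacy ippGEN output.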

 

If you have any problems moving to the new versions of Intel® IPP, feel free to contact us via Intel® Premier Support, or post your questions on the Intel® IPP forum.

 

Unity* Resource Center for x86 Support


 

Unity

This Unity* resource page on the Intel® Developer Zone is your central location for support of x86 within the Unity game engine. Check back often as this page will be updated frequently!

Adding x86 Support to Android* Apps Using the Unity* Game Engine

Enabling existing Unity* ARM*-based Android* SDK games with native x86 support is straightforward and easy. This document walks through the steps to produce a fat APK that includes both x86 and ARM libraries from within Unity 4.6 or Unity 5.
https://software.intel.com/en-us/android/articles/adding-x86-support-to-android-apps-using-the-unity-game-engine


Unite 2014 - Big Android: Best Performance on the Most Devices

 


Unity* Optimization Guide for x86 Android*

Download the PDF of Unity* Optimization Guide for x86 Android*

To get the most out of the Android* x86 platform, there are a number of performance optimizations you can apply to your project to help maximize performance. In this guide, we will show a variety of tools to use as well as features in the Unity* software that can help you enhance the performance of the native x86 code. We will discuss how to handle items like texture quality, batching, culling, light baking, and HDR effects. Additionally, we will show how to build an x86-specific binary for testing and other needs.

By the end of this guide you will be able to identify performance issues and what they are bound to, key optimizations, and methodologies for good game development in Unity. First we will go over some of the tools available that will make it easy to identify potential hot spots in your application.

NEWS

https://software.intel.com/en-us/blogs/2014/08/15/unity-android-support
We are pleased to announce delivery of a piece that has been missing from the most popular game engine on the planet – support for Intel® architecture, including Intel graphics. As part of this announcement, Intel and Unity will be working together to deliver the following for the Unity3D game engine:

  • Native Android* support for Intel architecture in all versions of Unity3D*
  • Access to unique features of Intel graphics through Unity3D
  • Access to IA's new CPU instructions and threading support

Unity has supported x86 on Windows* for a long time, but this collaboration brings native x86 support to Android* as well.

This functionality was previewed at Unite* and will be available soon with the latest versions of Unity* 4 and Unity* 5. Once you have this new version of Unity, you will just need to open your existing project and create a new Android build. This will automatically include native support for x86 in addition to ARM. Your app will now have the best performance, optimized for both Intel and ARM-based devices.

Full Press Release: http://newsroom.intel.com/community/intel_newsroom/blog/2014/08/20/intel-and-unity-collaborate-to-extend-android-support-across-intel-based-devices


Missed Unite?

Big Android: Best Performance On The Most Devices

 Download the presentation

Thursday, August 21, 17:00 - 17:30, Norcliff Room

Over 1 billion people use an Android device daily. This presentation will examine common bottlenecks and performance issues that affect Unity games on Android. Attendees will learn the best methods for reaching the highest possible FPS on the largest range of devices. We will also look at tools for low level profiling and optimization of the CPU and GPU (for ARM and x86).


Unity/X86 Labs

Intel and Unity will be teaming up to provide technical assistance to developers at the following upcoming events:


ISV ENTHUSIASM

 

Sonic Dash

Intel and Unity have given a few software developers access to a very early version of the Unity code base that supports access to Intel graphics and CPU technology. Early indications are that this announcement will cause tremendous excitement in the game developer community. SEGA is one of the companies jumping on this opportunity quickly, having already added x86 support to their Unity-based Sonic Dash* title. Chris Southall, Studio Head of Hardlight, has stated, "SEGA's Hardlight is one of the very first mobile studios to utilize the x86-enabled version of Unity in one of its games. We've seen impressive performance gains by 'going native' - it's been great working with Unity and Intel on this."



 

School of Dragons

Gaming companies like Jumpstart* want to release their software on as many platforms as possible, as easily as possible, while achieving better performance. Unity's 4.6 release, which provides native x86 support, is making these goals a reality. When Jumpstart applied the new Unity 4.5.4f1 version to their School of Dragons game, they achieved a 146% frames-per-second speedup and 87.6% lower CPU utilization simply by enabling Unity native x86 support.

Learn more


 

Unity is making native x86 app support easy with its latest 4.6 release. Square Enix quickly saw the benefits of supporting native x86 on Android: with their Hitman GO* release, developers achieved a 31.2% faster game load time to gameplay simply by enabling Unity native x86 support.

Learn more

Depth Perception and Enhanced Digital Photography with Intel® RealSense™ Technology


Download Depth-Perception-Intel-RealSense.pdf

By adding the "real sense" of human depth perception to digital imaging, Intel® RealSense™ technology enables 3D photography on mainstream tablets, 2-in-1s, and other Intel RealSense technology-enabled devices. These capabilities are based on extrapolating depth information from images captured by an array of three cameras, producing data for a 3D model that can be embedded into a JPEG photo file.

Software development kits and other developer tools from Intel will abstract depth perception processing to simplify the creation of applications without low-level expertise in depth processing. Devices that support this end-user functionality are available on the market now.

This article introduces software developers to the key mechanisms used by Intel® RealSense™ technology to implement depth perception in enhanced digital photography.

Encoding Data into a Depth Map

The third dimension in digital photography, as enabled by Intel RealSense technology, is the capture of the relative distances between the camera and various elements within the scene. This information is stored in a depth map, which is conceptually similar to a topographic map: a depth value (z-dimension) is stored for each pixel (x-y coordinate) in the image. Image capture to support depth mapping is accomplished using three camera sensors, as illustrated in Figure 1. Here, the 8 megapixel (MP) main image is augmented with information captured by two 720p Red, Green, and Blue (RGB) sensors.


Figure 1. RGB camera sensor array produces image with depth data.

The actual depth map is produced by computing the disparities between the positions of individual points in the images captured by the three cameras (based on parallax due to the physical separation of the cameras on the device). The disparity associated with each point in the scene is mapped onto a grayscale image: points with smaller disparities are farther from the device and are represented by darker pixels, while points with larger disparities are closer to the device and are represented by lighter pixels. The main image has a higher resolution and can be used independently, or, when needed by an application, the depth information can be used to model 3D space in the scene.
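The parallax relationship described above can be sketched with the standard pinhole stereo formula, depth = focal length × baseline / disparity. The focal length and baseline below are illustrative placeholders, not the calibration of any actual Intel RealSense camera array:

```java
// Illustrative pinhole stereo depth calculation. The focal length and
// baseline used in main() are made-up example values, NOT real camera data.
public class DisparityToDepth {
    // depth (meters) = focalLengthPixels * baselineMeters / disparityPixels
    public static double depthMeters(double focalPx, double baselineM, double disparityPx) {
        if (disparityPx <= 0) {
            return Double.POSITIVE_INFINITY; // zero disparity: point at infinity
        }
        return focalPx * baselineM / disparityPx;
    }

    public static void main(String[] args) {
        double focalPx = 700.0;  // hypothetical focal length in pixels
        double baselineM = 0.05; // hypothetical 5 cm camera separation
        // A larger disparity (a lighter pixel in the map) means a closer point.
        System.out.println(depthMeters(focalPx, baselineM, 35.0)); // about 1 m
        System.out.println(depthMeters(focalPx, baselineM, 7.0));  // about 5 m
    }
}
```

Note how depth falls off as the inverse of disparity, which is why depth precision degrades beyond a few meters.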

Resolution of the depth map is limited by the size of the image captured by the lowest resolution sensor (720p). It may be saved as an 8-bit or 16-bit PNG file; typically, the depth map roughly doubles the overall size of the finished JPEG file. The depth information itself is stored along with the main image in a single JPEG file, which remains compatible with standard image viewers. However, when viewed on a system enabled with an Intel RealSense 3D camera, the depth information is also retrieved for use by various RealSense apps.

Quality of the depth map is dependent on a number of factors, including the following:

  • Distance from camera to subject. Distances between 1 and 10 meters provide an optimal depth experience, with 1 to 5 meters providing the best measurement experience.
  • Lighting. Dimly lit scenes require higher ISO equivalents, which can produce sensor noise and interfere with distance calculations; glare and reflective surfaces can also adversely affect depth images.
  • Texture and contrast. Clear visual distinctions between elements in a scene—as opposed to solid masses of color or busy geometric patterns—help provide for dependable outcomes of depth algorithms.

Hardware and Use Cases

Depth photography is currently available using the Intel RealSense R100 rear-facing three-camera array as featured in the Dell Venue 8 7840 Android tablet. At only 6 millimeters (less than 1/4”) thick and approximately 300 grams (0.7 pounds), this Venue tablet is powered by the 2.3 GHz Intel® Atom™ processor Z3580 and provides an 8.4-inch OLED display with 2560 x 1600 resolution.

One common use case for depth mapping in real-world applications is to produce accurate measurements of objects in a photographed scene after the image has been captured. This is accomplished using the 3D data within the depth map. To illustrate this concept in a light-hearted way, Intel created the "Fish Demo," as shown in Figure 2, where two friends display the fish they have caught.


Figure 2. Intel® RealSense™ technology dispels a false fish story using actual measurements.

While one of the two men has caught the smaller fish (11 inches, compared to his friend’s fish that is 3 feet and 1 inch long), he crowds in closer to the camera, making his catch appear larger in a conventional photograph. In this demonstration, the Measurement application allows for actual measurements of each fish to be taken with a simple tap of the screen on the head and tail of each, and the actual measurements are superimposed over the image.
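The measurement idea can be sketched with the same pinhole model run in reverse: an object's real-world size is its span in pixels scaled by depth over focal length. The focal length and pixel spans below are made-up values chosen to roughly match the fish example, not data from the actual demo:

```java
// Back-of-the-envelope measurement from a depth-enabled photo: with the
// pinhole model, realLength = depth * pixelSpan / focalLengthPixels.
// All numbers here are hypothetical; a real app uses device calibration data.
public class DepthMeasure {
    public static double realLengthMeters(double depthM, double pixelSpan, double focalPx) {
        return depthM * pixelSpan / focalPx;
    }

    public static void main(String[] args) {
        double focalPx = 700.0; // hypothetical focal length in pixels
        // Two fish spanning the same ~196 pixels in the image look equal in
        // size, but the depth map tells them apart:
        System.out.println(realLengthMeters(1.0, 196.0, focalPx));  // ~0.28 m (11 in) at 1 m
        System.out.println(realLengthMeters(3.35, 196.0, focalPx)); // ~0.94 m (3 ft 1 in) at 3.35 m
    }
}
```

This is exactly why the closer fish "looks" as big as the far one in a conventional photo: without the per-pixel depth, the pixel span alone is ambiguous.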

A broad range of similar use cases are possible. Parents could document the growth of their children in a digital photo album as opposed to marking up their door frames. Shopping for furniture could be simplified by identifying how pieces in the showroom would fit into the living room back at home. For further illustration, consider the series of television commercials featuring Jim Parsons including the scene in Figure 3, where he explains to a stunt-bike rider how measurements ahead of time using Intel RealSense technology could have made a bike jump successful.


Video at www.youtube.com/watch?v=SFo3Mf0lsvw
Figure 3. Jim Parsons suggests preparing for a bike stunt using Intel® RealSense™ technology.

About the Author

Kyle Mabin has been with Intel for 22 years and is a Technical Marketing Engineer with SSG’s Developer Relations Division. He is based in Chandler, AZ.

Learn more about Intel® RealSense Technology:
www.intel.com/software/realsense

Intel® System Studio 2016 Beta - What's New


 

Intel® System Studio 2016 Beta provides deep hardware and software insights to speed up development, testing, and optimization of Intel-based IoT, intelligent systems, mobile systems, and embedded systems. Intel® System Studio 2016 Beta adds exciting new features such as enhanced Intel® Quark™ SoC, Edison, and SoFIA support; improved Eclipse* integration; Wind River* Workbench* integration; and native code generation support for Intel® Graphics Technology on Linux* targets.

We also introduce the Intel® System Studio 2016 for Windows* Beta, with Microsoft* Visual Studio* integration. It adds support for cross-development targeting the Microsoft* Windows* 7 and 8.1 releases, and in the Professional Edition it adds remote performance, power, and thermal analysis. It is intended for use on Microsoft* Windows* host operating systems, deploying build results and performing sampling analysis on Microsoft* Windows* and Microsoft* Windows* Embedded targets.

What's New in Intel® System Studio 2016 Beta

New Platform Support

Support for the latest Airmont, Intel® Quark™, Edison, and SoFIA platforms has been added by various components. Please check with us for early access to upcoming processor support under a non-disclosure agreement. Use Intel® System Studio to develop and debug system software for all upcoming mobile and embedded platforms.

Intel® C++ Compiler

Support and optimizations for

  • Enhanced C++11 feature support
  • Enhanced C++14 feature support
  • FreeBSD* support
  • Added support for Red Hat Enterprise Linux* 7
  • Deprecated Red Hat Enterprise Linux* 5.

Intel® VTune™ Amplifier for Systems

  • Basic hotspots, Locks & Waits and EBS with stacks for RT kernel and RT application for Linux Targets
  • EBS based stack sampling for kernel mode threads
  • Support for Intel® Atom™ x7 Z8700 & x5 Z8500/X8400 processor series (Cherry Trail) including GPU analysis
  • KVM guest OS profiling from host based on Linux Perf tool
  • Support for analysis of applications in virtualized environment (KVM). Requires Linux kernels > 3.2 and Qemu version > 1.4
  • Automated remote EBS analysis on SoFIA  (by leveraging existing sampling driver on target)
  • Super Tiny display mode added for the Timeline pane to easily identify problem areas for results with multiple processes/threads
  • Platform window replacing Tasks and Frames window and providing CPU, GPU, and bandwidth metrics data distributed over time
  • General Exploration analysis views extended to display a confidence indication (greyed-out font) for unreliable metrics data resulting, for example, from a low number of collected samples
  • GPU usage analysis for OpenCL™ applications extended to display compute-originated batch buffers on the GPU software queue in the Timeline pane (Linux* target only)
  • New filtering mode for command line reports to display data for the specified column names only

Intel® Inspector for Systems

  • Added support for DWARF Version 4 symbolics.
  • Improved custom install directory process.
  • For Windows:
    • Added limited support for memory growth when analyzing applications containing Windows* fibers.

GDB* - The GNU Debugger

The version of GDB provided as part of Intel® System Studio 2016 is based on GDB version 7.8. Notably, it contains the following features added by Intel:

  • Data Race Detection (pdbx):
    • Detect and locate data races for applications threaded using POSIX* threads
  • Branch Trace Store (btrace):
    • Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
  • Pointer Checker:
    • Assist in finding pointer issues if compiled with the Intel® C++ Compiler and with the Pointer Checker feature enabled (see the Intel® C++ Compiler documentation for more information)
  • Intel® Processor Trace (Intel® PT) Support:
    • Improved version of Branch Trace Store supporting Intel® TSX. For 5th generation Intel® Core™ processors and later, access it via the command: (gdb) record btrace pt

These features are only provided for the command-line version and are not supported via the Eclipse* IDE integration.

 

Intel® Debugger for Heterogeneous Compute 2016 Features
The version of Intel® Debugger for Heterogeneous Compute 2016 provided as part of Intel® System Studio 2016 uses GDB version 7.6. It provides the following features:

  • Debugging applications containing offload enabled code to Intel® Graphics Technology
  • Eclipse* IDE integration

Intel® System Debugger

  • Support for Intel® Atom™ x7 Z8700 & x5 Z8500/X8400 processor series (Cherry Trail)
  • Several bug fixes and stability improvements

Intel® Threading Building Blocks

 

  • Added a C++11 variadic constructor for enumerable_thread_specific; the arguments from this constructor are used to construct thread-local values.
  • Improved exception safety for enumerable_thread_specific.
  • Added documentation for the tbb::flow::tagged_msg class and tbb::flow::output_port function.
  • Fixed build errors for systems that do not support dynamic linking.
  • C++11 move-aware insert and emplace methods have been added to concurrent unordered containers.

 

Product Contents of Intel® System Studio 2016 Beta for Windows*

The product contains the following components

  1. Intel® C++ Compiler 16.0 Beta
  2. Intel® Integrated Performance Primitives 9.0 Beta
  3. Intel® Math Kernel Library 11.3 Beta
  4. Intel® Threading Building Blocks 4.3 Update 4
  5. Intel® System Studio System Analyzer, Frame Analyzer and Platform Analyzer 2014 R4
  6. Intel® VTune™ Amplifier 2016 Beta for Systems with Intel® Energy Profiler
    • Intel® VTune™ Amplifier Sampling Enabling Product (SEP) 3.15
    • SoC Watch for Windows* 1.8.1
  7. Intel® Inspector 2016 Beta for Systems

Product Contents of Intel® System Studio 2016 Beta 

The product contains the following components

  1. Intel® C++ Compiler 16.0 Beta
  2. Intel® Integrated Performance Primitives 9.0 Beta for Linux*
  3. Intel® Math Kernel Library 11.3 Beta for Linux*
  4. Intel® Threading Building Blocks 4.3 Update 4
  5. Intel® System Debugger 2016 Beta
    • Intel® System Debugger notification module xdbntf.ko (Provided under GNU General Public License v2)
  6. OpenOCD 0.8.0 library (Provided under GNU General Public License v2+)
    • OpenOCD 0.8.0 source (Provided under GNU General Public License v2+)
  7. GNU* GDB 7.8.1 (Provided under GNU General Public License v3)
    • Source of GNU* GDB 7.8.1 (Provided under GNU General Public License v3)
  8. SVEN Technology 1.0 (SDK provided under GNU General Public License v2)
  9. Intel® VTune™ Amplifier 2016 Beta for Systems with Intel® Energy Profiler
    • Intel® VTune™ Amplifier Sampling Enabling Product (SEP) 3.15
    • Intel® Energy Profiler
    • WakeUp Watch for Android* 3.1.6
    • SoC Watch for Android* 1.5.1
  10. Intel® Inspector 2016 Beta for Systems
  11. Intel® System Studio System Analyzer 2014 R4

What's New and Product Contents of Intel® System Studio 2015

Product Contents of previous Intel® System Studio releases

 

Get Help or Advice

Getting Started?
Click the Learn tab for guides and links that will quickly get you started.
Support Articles and White Papers – Solutions, Tips and Tricks

Resources
Documentation
Training Material

Support

We look forward to your questions and feedback. Please don't hesitate to escalate any questions you have or issues you run into. We thank you for helping us to continuously improve Intel® System Studio.

Intel® Premier Support – (registration is required) - For secure, web-based, engineer-to-engineer support, visit our Intel® Premier Support web site. Intel Premier Support registration is required. Once logged in search for the product name Intel® System Studio for Linux*.

Please provide feedback at any time:

 

Intel® Architecture Support Guide for Android* Middleware Providers


Download PDF

x86 support has been part of Android since 2011, and nowadays, as flagship products like the Dell Venue* 8 7840, Nokia* N1, Google Nexus* Player, and more than 200 other devices are based on Intel® architecture, it's becoming increasingly important for middleware software providers to support x86 devices.

When choosing third-party middleware, companies look carefully at what CPU-architectures are supported, as the choice will have a direct impact on the compatibility of the final application, potentially making or breaking the deal.

In many cases supporting x86 is necessary. Note, however, that it only makes sense for middleware that actually ships architecture-specific binaries (usually .so files packaged into the final APKs). It's possible to check for the presence of .so files in an APK using a Zip archive viewer, the aapt dump badging command, or Native Libs Monitor:

Native Libs Monitor Details

As of NDK r10d, Android supports these CPU architectures (ABIs): armeabi, armeabi-v7a, x86, x86_64, arm64-v8a, mips, and mips64.
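Because an APK is just a Zip archive, the same check can be done programmatically by listing entries under lib/&lt;ABI&gt;/. The sketch below takes entry names as a plain list for clarity; in a real tool they would come from java.util.zip.ZipFile. The file and library names are hypothetical:

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;
import java.util.TreeSet;

// Minimal sketch: group native-library entry names by their ABI folder.
// Entries in an APK look like lib/<ABI>/lib<name>.so.
public class ApkAbiScanner {
    public static TreeMap<String, TreeSet<String>> nativeLibsByAbi(List<String> entryNames) {
        TreeMap<String, TreeSet<String>> result = new TreeMap<>();
        for (String name : entryNames) {
            if (name.startsWith("lib/") && name.endsWith(".so")) {
                String[] parts = name.split("/");
                if (parts.length == 3) { // lib / <ABI> / <file>.so
                    result.computeIfAbsent(parts[1], k -> new TreeSet<>()).add(parts[2]);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical APK contents; with a real APK these names would come from
        // new java.util.zip.ZipFile("app.apk").entries().
        List<String> entries = Arrays.asList(
                "classes.dex",
                "lib/armeabi-v7a/libgame.so",
                "lib/x86/libgame.so",
                "res/drawable/icon.png");
        System.out.println(nativeLibsByAbi(entries));
    }
}
```

An APK whose map contains no x86 (or x86_64) key ships no native x86 binaries and would rely on runtime binary translation on Intel devices.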

The way of adding x86/x86_64 support entirely depends on what exactly is put into developers' hands (source code? libraries? toolchains? services?). Generally, adding such support is done by following Android development best practices.

Since Android originally supported only ARM*, some providers didn't pay much attention to supporting other platforms. To overcome this issue, Intel provides the ability to translate ARM binaries at runtime on Intel devices, and it works well. However, middleware providers should still offer native support for Intel architecture.

This document gives an overview of what has to be done at the library/engine level to better support Intel platforms.

Two main cases are identified:

  1. A component/library (OpenCV, ffmpeg, …) is provided. Developers can integrate it into their own Android project.
  2. A complete stack/service (like Unity, Adobe Air*, Corona*, and so on) is provided. The software or service handles the Android project creation and/or APK packaging.

In both cases, documentation and a sample project should be available to customers, demonstrating the proper packaging and inclusion of the x86/x86_64 libs in the final applications.

Providing a library/engine to be reused or integrated by Android developers

Libraries and engines with native components can be distributed under different forms: several .jars, an .aar, a NDK project, an Android project, etc.

If the library consists of a Java* library and .so files, it should be distributed as an .aar instead of .jars accompanied by .so files. AARs are .jar files extended for Android that allow the inclusion of resources and native libraries. Android developers can easily include AARs once they have been uploaded to Maven Central Repository / jCenter.

You should place .so files in an .aar under ./jni/<ABI>.  For x86, place .so files under ./jni/x86/.

In case the library is distributed as NDK prebuilts (.so or .a files) to be reused from a NDK project, the NDK module declaration should be provided in an Android.mk file by using the dynamic variable $(TARGET_ARCH_ABI) in order to point to the right file depending on the various target architectures:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := yourlib
LOCAL_SRC_FILES := prebuilts/$(TARGET_ARCH_ABI)/libYourLib.so
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/prebuilts/includes
include $(PREBUILT_SHARED_LIBRARY)

Also set all the supported architectures in the Application.mk file, like so:

APP_ABI := all # can also be: armeabi-v7a x86 x86_64 arm64-v8a…

If the library/engine is distributed through a complete Android project, .so files have to be included in the proper location:

  • In an Eclipse*/Apache Ant* project, place them under libs/<ABI>.
  • In an Android Studio/gradle project, place them under src/main/jniLibs/<ABI>.

To change the jniLibs location for a project managed by gradle, use the jniLibs.srcDir property from the build.gradle file. For example, you can set it back to the same location as Eclipse, to libs, where NDK libs are generated:  sourceSets.main { jniLibs.srcDir 'src/main/libs' }.

Application packaging

When providing a full service and/or software that handles APK packaging, you need to properly generate compatible APKs.

The recommended (and default) way is to package one APK and put all the .so files for all the supported architectures in the APK under lib/<ABI>/, where <ABI> can be armeabi, armeabi-v7a, x86, x86_64, arm64-v8a, mips, or mips64.

Of course, you should include x86 at a minimum to better support x86 platforms:

Native Libs Monitor  APK - x86 platforms

To get these .so files properly packaged under lib/<ABI>, by default they have to be placed under libs/<ABI> when using an Ant/Eclipse project, or under src/main/jniLibs/<ABI> when using a gradle/Android Studio project.

When supporting third-party plugins that can embed architecture-specific binaries, as Unity does, attention should be given to making sure these plugins are compatible with all the supported platforms. Before Android 5.0, it was still possible to load ARM libs from an x86 folder, but this is no longer possible and leads to errors such as "dlopen failed: 'libMyLib.so' has unexpected e_machine: 40". So plugins have to be upgraded to also include x86 binaries, and the engine/service has to enforce that in order to allow a smooth transition.

Reducing the size of the produced APKs

If the size of the .so files is too large to make embedding several versions of these libs feasible, you can package one APK per architecture. The only requirements are to generate each APK with a different set of .so files placed in the right lib/<ABI> folder and to give them different versionCodes, following this rule: x86_64 > x86 > arm64-v8a > armeabi-v7a > armeabi > mips64 > mips.

When using gradle, all this can be achieved seamlessly (all the proper APKs are generated in one build) using splits and dynamic version code, by adding the following code to build.gradle:

android {
    ...
    splits {
        abi {
            enable true
            reset()
            include 'x86', 'x86_64', 'armeabi-v7a', 'arm64-v8a'
            universalApk true
        }
    }
    // map for the version code
    project.ext.versionCodes = ['armeabi': 1, 'armeabi-v7a': 2, 'arm64-v8a': 3, 'mips': 5, 'mips64': 6, 'x86': 8, 'x86_64': 9]
    applicationVariants.all { variant ->
        // assign a different version code to each output
        variant.outputs.each { output ->
            output.versionCodeOverride =
                    project.ext.versionCodes.get(output.abiFilter, 0) * 1000000 + android.defaultConfig.versionCode
        }
    }
    ...
}

To upload multiple APKs to the Google Play* Store for a single application, you are required to switch to Advanced mode before uploading the second APK:

Advanced mode to upload new APKs

Once all the APKs are uploaded, the summary of the available APKs should look like this:

The summary of the available APKs

Conclusion

Adding Intel architecture support, where possible, is usually quite simple, and middleware suppliers should consider offering it in their software so their customers can use it. The Android Native Development Kit has included this capability since 2011.

If you need help with recompilation issues, the Intel® Developer Zone (software.intel.com) features other articles on this topic, such as NDK apps porting methodologies and the NEON* to Intel® SSE instructions automatic porting solution.

To test your applications, Intel is offering free use of Intel® processor-based devices through well-known remote test services like AppThwack*, testdroid*, and Testin*.


Handling Offline Capability and Data Sync in an Android* App – Part 2


Download PDF

Abstract 

Mobile apps that rely on backend servers for their data needs should provide seamless offline capability. To provide this capability, apps must implement a data sync mechanism that takes connection availability, authentication, and battery usage, among other things, into account. In Part 1, we discussed how to leverage the Android sync adapter framework to implement these features in a sample restaurant app, mainly using a content provider. In this part we explain the remaining pieces: the sync adapter and the authenticator. We also look at how to use Google Cloud Messaging (GCM) notifications to trigger the data sync with a backend server.

Contents

Abstract
Overview
Data Sync Strategy for Restaurant Sample App – Little Chef
Sync Adapter Implementation
Authenticator Implementation
Configuring and Triggering the Sync
About the Author

Overview 

If you haven’t already read Part 1, please refer to the following link:

https://software.intel.com/en-us/articles/handling-offline-capability-and-data-sync-in-an-android-app-part-1

Part 1 covers the integration of content provider with our sample app, which uses local SQLite database.

Though a content provider is optional for a sync adapter, it abstracts the data model from other parts of the app and provides a well-defined API for integrating with other components of the Android framework (for example, loaders).

To fully integrate the Android sync adapter framework into our sample app, we need to implement the following pieces: a sync adapter, a sync service that links the sync adapter with the Android sync framework, an authenticator, and an authenticator service to bridge the sync adapter framework and the authenticator.

For the authenticator we will use a dummy account for demo purposes.

Data Sync Strategy for Restaurant Sample App – Little Chef 

As we discussed in previous articles, “Little Chef” is a sample restaurant app (Figure 1) with several features including menu content, loyalty club, and location-based services among others. The app uses a backend server REST API to get the latest menu content and updates. The backend database can be updated using a web frontend. The server can then send GCM notifications for data sync as required.

A Restaurant Sample App Little Chef
Figure 1: A Restaurant Sample App - Little Chef

When the restaurant manager updates menu items on the backend server, we need an efficient sync strategy for propagating these changes to all the deployed mobile devices/apps.

The sync adapter framework has several ways to accomplish this: at regular intervals, on demand, or when the network becomes available. If the app mainly relies on data coming from the server, we can use GCM notifications to inform all the clients to sync. This is more efficient and reduces unnecessary sync requests, saving battery and other resource usage. This is the approach taken in the Little Chef sample app. For details on other sync strategies, please refer to https://developer.android.com/training/sync-adapters/running-sync-adapter.html

We also use simple database version tagging to determine if the local SQLite data model is out of sync with the backend data model. For every change made on the backend server, the server DB version tag is incremented. When we receive a sync request, we compare the local DB version and the remote DB version, and only if they differ do we proceed with the sync. As the sync adapter implementation relies only on REST API endpoints, it is agnostic to any server-side implementation specifics.

Ideally, the server and client need to keep track of all the DB records that have changed and replay those changes on the client side. As our sample app data model is small, the actual sync is going to replace the local data with the latest copy from server (but only when the DB versions differ).

Sync Adapter Implementation 

We implement the sync adapter by extending the AbstractThreadedSyncAdapter class; the main method to focus on is onPerformSync, where the actual sync logic resides. The sync adapter framework by itself does not provide any data transfer, connection handling, or conflict resolution; it just calls this method whenever a sync is triggered. It does run the sync adapter in a background thread, so at least we do not have to manage background threading ourselves.

In the code snippet below, onPerformSync uses the Retrofit* REST client library to get the latest server DB version. It compares it with local DB version and determines if a sync is required. If a sync is required, it will do another REST call to download all the menu items data from the server and replace the local content with the one from server.

Content Providers come in handy here. We can issue a “notify” to all Content Provider listeners. As the sample app uses Loaders and CursorAdapter to display the Menu items, they automatically get refreshed with new values immediately after the sync.

public class RestaurantSyncAdapter extends AbstractThreadedSyncAdapter {
    private static final String TAG = "RestaurantSyncAdapter";

    private SharedPreferences sPreferences;
    private RestaurantRestService restaurantRestService;
    private ContentResolver contentResolver;

    private void init(Context c) {
        sPreferences = PreferenceManager.getDefaultSharedPreferences(c);

        RestAdapter restAdapter = new RestAdapter.Builder()
                .setEndpoint("http://my.server.com/")
                .build();
        restaurantRestService = restAdapter.create(RestaurantRestService.class);
        contentResolver = c.getContentResolver();
    }

    public RestaurantSyncAdapter(Context context, boolean autoInitialize) {
        super(context, autoInitialize);
        init(context);
    }

    public RestaurantSyncAdapter(Context context, boolean autoInitialize, boolean allowParallelSyncs) {
        super(context, autoInitialize, allowParallelSyncs);
        init(context);
    }

    private ContentValues menuToContentValues(RestaurantRestService.RestMenuItem menuItem) {
        ContentValues contentValues = new ContentValues();
        contentValues.put("_id", menuItem._id);
        contentValues.put("category", menuItem.category);
        contentValues.put("description", menuItem.description);
        contentValues.put("imagename", menuItem.imagename);
        contentValues.put("menuid", menuItem.menuid);
        contentValues.put("name", menuItem.name);
        contentValues.put("nutrition", menuItem.nutrition);
        contentValues.put("price", menuItem.price);
        return contentValues;
    }

    @Override
    public void onPerformSync(Account account, Bundle bundle, String s,
                              ContentProviderClient contentProviderClient, SyncResult syncResult) {
        try {
            // Check if any DB changes on server
            int serverDBVersion = restaurantRestService.dbVersion().user_version;
            int localDBVersion = sPreferences.getInt("DB_VERSION", 0);
            Log.d(TAG, "onPerformSync: localDBversion " + Integer.toString(localDBVersion) + " serverDBVersion " + Integer.toString(serverDBVersion));
            if (serverDBVersion > 0 && serverDBVersion != localDBVersion) {
                // fetch menu items from server and update the local DB
                List<ContentValues> contentValList = new ArrayList<>();
                for (RestaurantRestService.RestMenuItem menuItem: restaurantRestService.menuItems()) {
                    ContentValues contentValues = menuToContentValues(menuItem);
                    contentValues.putNull("_id");
                    contentValList.add(contentValues);
                }
                int deletedRows = contentProviderClient.delete(RestaurantContentProvider.MENU_URI,null,null);
                int insertedRows = contentProviderClient.bulkInsert(RestaurantContentProvider.MENU_URI, contentValList.toArray(new ContentValues[contentValList.size()]));
                Log.d(TAG, "completed sync: deleted " + Integer.toString(deletedRows) + " inserted " + Integer.toString(insertedRows));

                // update local db version
                sPreferences.edit().putInt("DB_VERSION", serverDBVersion).commit();

                // notify content provider listeners
                contentResolver.notifyChange(RestaurantContentProvider.MENU_URI, null);
            }

        } catch (Exception e) {
            Log.d(TAG, "Exception in sync", e);
            // hasHardError() only queries state; set databaseError to flag a hard error
            syncResult.databaseError = true;
        }
    }
}
Code Snippet 1, Sync Adapter Implementation

The Sync Adapter gets instantiated via its corresponding Sync Service. Implementing the Sync Service is straightforward: instantiate the Sync Adapter object in the onCreate method of the Sync Service and return its binder object in the onBind method. Please refer to the next code snippet.

public class RestaurantSyncService extends Service {
    private static final String TAG = "RestaurantSyncService";
    private static final Object sAdapterLock = new Object();
    private static RestaurantSyncAdapter sAdapter = null;
    @Override
    public void onCreate() {
        super.onCreate();
        Log.e(TAG, "onCreate()");
        synchronized (sAdapterLock) {
            if (sAdapter == null) {
                sAdapter = new RestaurantSyncAdapter(getApplicationContext(), true);
            }
        }
    }
    @Override
    public IBinder onBind(Intent intent) {
        return sAdapter.getSyncAdapterBinder();
    }
}
Code Snippet 2, Sync Service Class to instance Sync Adapter

We only have two more items to complete the Sync Adapter implementation: creating an xml file describing the Sync Adapter configuration (metadata) and, like for any Android Service, adding a Sync Service entry to the Android Manifest.

We can give the config xml file any name (for example, syncadapter.xml) and place it in the res/xml folder.

<?xml version="1.0" encoding="utf-8"?>
<sync-adapter
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:contentAuthority="com.example.restaurant.provider"
    android:accountType="com.example.restaurant"
    android:userVisible="false"
    android:supportsUploading="false"
    android:allowParallelSyncs="false"
    android:isAlwaysSyncable="true"/>
Code Snippet 3, syncadapter.xml

For detailed explanation of each field, please refer to https://developer.android.com/training/sync-adapters/creating-sync-adapter.html#CreateSyncAdapterMetadata

Please note that in Code Snippet 3 we use “com.example.restaurant” as the accountType. We will use the same value when implementing the Authenticator.

The Android Manifest entry for the Sync Service is shown below. Note that the Sync Adapter xml above is referenced via android:resource under the meta-data entry.

<service
    android:name=".RestaurantSyncService"
    android:enabled="true"
    android:exported="true"
    android:process=":sync">
    <intent-filter>
        <action android:name="android.content.SyncAdapter" />
    </intent-filter>
    <meta-data
        android:name="android.content.SyncAdapter"
        android:resource="@xml/syncadapter" />
</service>
Code Snippet 4, Sync Service entry in Android Manifest

Please use the official documentation for detailed reference: https://developer.android.com/training/sync-adapters/creating-sync-adapter.html

Authenticator Implementation 

The Android Sync Adapter framework requires an Authenticator to be part of the implementation. This can be very useful if we need to implement backend authentication, as we can then leverage the Android Accounts API for seamless integration.

For the Sync Adapter framework to work, we need an account; even a dummy account works. In that case we can use the default stub implementation for the Authenticator component, which makes our Authenticator implementation a lot easier.

Similar to Sync Adapter implementation, we first create an Authenticator class and an Authenticator Service to go with it, then we create a metadata xml for Authenticator, and of course the Android Manifest entry for Authenticator Service.

We implement Authenticator by extending the AbstractAccountAuthenticator class. Use your favorite IDE to generate default/stub method implementations.

public class Authenticator extends AbstractAccountAuthenticator {

    // Simple constructor
    public Authenticator(Context context) {
        super(context);
    }

    @Override
    public Bundle editProperties(AccountAuthenticatorResponse accountAuthenticatorResponse, String s) {
        throw new UnsupportedOperationException();
    }

    @Override
    public Bundle addAccount(AccountAuthenticatorResponse accountAuthenticatorResponse, String s, String s2, String[] strings, Bundle bundle) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle confirmCredentials(AccountAuthenticatorResponse accountAuthenticatorResponse, Account account, Bundle bundle) throws NetworkErrorException {
        return null;
    }

    @Override
    public Bundle getAuthToken(AccountAuthenticatorResponse accountAuthenticatorResponse, Account account, String s, Bundle bundle) throws NetworkErrorException {
        throw new UnsupportedOperationException();
    }

    @Override
    public String getAuthTokenLabel(String s) {
        throw new UnsupportedOperationException();
    }

    @Override
    public Bundle updateCredentials(AccountAuthenticatorResponse accountAuthenticatorResponse, Account account, String s, Bundle bundle) throws NetworkErrorException {
        throw new UnsupportedOperationException();
    }

    @Override
    public Bundle hasFeatures(AccountAuthenticatorResponse accountAuthenticatorResponse, Account account, String[] strings) throws NetworkErrorException {
        throw new UnsupportedOperationException();
    }
}
Code Snippet 5, Authenticator Stub Implementation

The Authenticator gets instantiated via its corresponding Authenticator Service. Implementing the Authenticator Service is straightforward: instantiate the Authenticator object in the onCreate method of the Authenticator Service and return its binder object in the onBind method. Please refer to Code Snippet 6.

public class AuthenticatorService extends Service {
    private Authenticator mAuthenticator;
    public AuthenticatorService() {
    }

    @Override
    public void onCreate() {
        // Create a new authenticator object
        mAuthenticator = new Authenticator(this);
    }

    @Override
    public IBinder onBind(Intent intent) {
        return mAuthenticator.getIBinder();
    }
}
Code Snippet 6, Authenticator Service Class to instance Authenticator

The Authenticator configuration xml metadata is shown below. Please note that the accountType is the same as the one we used in the Sync Adapter metadata. This is important: the same value must be used in both locations.

<?xml version="1.0" encoding="utf-8"?>
<account-authenticator
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:accountType="com.example.restaurant"
    android:icon="@drawable/ic_launcher"
    android:smallIcon="@drawable/ic_launcher"
    android:label="@string/app_name"/>
Code Snippet 7, authenticator.xml in res/xml folder

Finally, we need to create the Android Manifest entry for the Authenticator Service; please see Code Snippet 8. Notice that the authenticator metadata above is referenced via android:resource under meta-data.

<service
    android:name=".AuthenticatorService"
    android:enabled="true">
    <intent-filter>
        <action android:name="android.accounts.AccountAuthenticator" />
    </intent-filter>
    <meta-data
        android:name="android.accounts.AccountAuthenticator"
        android:resource="@xml/authenticator" />
</service>
Code Snippet 8, Android Manifest entry for Authenticator Service
 

Configuring and Triggering the Sync 

Now that we have all the pieces in place (Content Provider, Sync Adapter, and Authenticator), we just need to tie them together so we can trigger a sync whenever required. Technically, the Android framework automatically does most of the magic, but we still need to configure the trigger.

As discussed earlier, there are several ways to trigger a sync. For the sample app, we use an incoming GCM notification with a sync attribute as the trigger. We could also trigger the sync at app startup, for example in onCreate or onResume of the Main Activity.

private Account createDummyAccount(Context context) {
    Account dummyAccount = new Account("dummyaccount", "com.example.restaurant");
    AccountManager accountManager = (AccountManager) context.getSystemService(ACCOUNT_SERVICE);
    accountManager.addAccountExplicitly(dummyAccount, null, null);
    ContentResolver.setSyncAutomatically(dummyAccount, RestaurantContentProvider.AUTHORITY, true);
    return dummyAccount;
}

@Override
protected void onResume() {
    super.onResume();
    checkGooglePlayServices();
    ContentResolver.requestSync(createDummyAccount(this), RestaurantContentProvider.AUTHORITY, Bundle.EMPTY);
}
Code Snippet 9, Create a dummy account and trigger an on-demand sync in Main Activity

We use a dummy account with the “com.example.restaurant” accountType, which we configured in both the Sync Adapter and Authenticator xml metadata. We also explicitly call setSyncAutomatically on the ContentResolver, as it is required. This call can be avoided if you specify android:syncable="true" on the provider entry in your Android Manifest.
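For reference, a provider manifest entry with android:syncable set might look like the sketch below. The authority matches the one used in this sample’s syncadapter.xml, and the class name appears in the sync code above; the android:exported value is an assumption for illustration:

```xml
<provider
    android:name=".RestaurantContentProvider"
    android:authorities="com.example.restaurant.provider"
    android:exported="false"
    android:syncable="true" />
```

With android:syncable="true" declared here, the explicit setSyncAutomatically call becomes optional.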

The actual sync request is made using the requestSync method on ContentResolver.

We can issue the same on-demand sync method call when we receive a GCM notification in Broadcast Receiver.

public class GcmBroadcastReceiver extends BroadcastReceiver {
    private static final String TAG = GcmBroadcastReceiver.class.getSimpleName();
    public GcmBroadcastReceiver() {
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        GoogleCloudMessaging gcm = GoogleCloudMessaging.getInstance(context);
        if (GoogleCloudMessaging.MESSAGE_TYPE_MESSAGE.equals(gcm.getMessageType(intent)) &&
                intent.getExtras().containsKey("com.example.restaurant.SYNC_REQ")) {
            Log.d(TAG, "GCM sync notification! Requesting DB sync for server dbversion " + intent.getStringExtra("dbversion"));
            ContentResolver.requestSync(new Account("dummyaccount", "com.example.restaurant"),
                    RestaurantContentProvider.AUTHORITY, Bundle.EMPTY);
        }
    }
}
Code Snippet 10, GCM notification handling to trigger background sync

This triggers a background sync any time the server sends a GCM notification with the sync attribute.
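As an illustration, a downstream GCM message matching the receiver above could be shaped as follows (standard GCM HTTP JSON payload; the token placeholder and the literal values are assumptions, but the data keys are the ones the receiver reads):

```json
{
  "to": "<device-registration-token>",
  "data": {
    "com.example.restaurant.SYNC_REQ": "1",
    "dbversion": "42"
  }
}
```

Keys under data arrive on the client as intent extras, which is how the receiver finds com.example.restaurant.SYNC_REQ and dbversion.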

About the Author 

Ashok Emani is a software engineer in the Intel Software and Services Group. He currently works on the Intel® Atom™ processor scale-enabling projects.

Intel OBL Sample Source Code License (MS-LPL Compatible)

Performance Promise of OpenCV* 3.0 and Intel® INDE OpenCV


Introduction

The Intel® Integrated Native Developer Experience (Intel® INDE) is a cross-architecture productivity suite that provides developers with tools, support, and IDE integration to create high-performance C++/Java* applications for Windows* on Intel® architecture and Android* on ARM* and Intel® architecture.

The new OpenCV beta, a feature of Intel INDE, is compatible with the new open source OpenCV 3.0 beta (Open Source Computer Vision Library: http://opencv.org). It provides free binaries for computer vision application development and production, for usages like enhanced photography, augmented reality, video summarization, and more.

Key features of the Intel® INDE OpenCV are

  • Compatibility with OpenCV 3.0
  • Pre-built and validated binaries, cleared of IP-protected building blocks.
  • Easy to use and maintain, with IDE integration for both Windows and Android development.
  • Optimized for Intel® platforms with heterogeneous computing.

This document focuses on performance. Refer to Getting Started with Intel® INDE OpenCV for the full list of Intel INDE OpenCV features.

While the OpenCV 3.0 Transparent API (described in the “OpenCV 3.0 Architecture Guide for Intel INDE OpenCV” document) creates an opportunity for GPU computing, the free subset of Intel IPP provides a powerful implementation of OpenCV functions for Intel CPUs. Very few libraries offer GPU acceleration paired with efficient CPU fallback in a way that is transparent to the user. This document describes both the original (open source) OpenCV improvements and the performance features that are unique to the Intel INDE OpenCV version.

Introducing OpenCV 3.0

OpenCV 3.0 is a new iteration of the now de-facto standard library for vision and image processing. Since its Alpha version, it has introduced important changes in the OpenCV architecture. Directly from the changelog:

  • The new technology is nick-named "Transparent API" and, in brief, is extension of classical OpenCV functions, such as cv::resize(), to use OpenCL underneath. See more details here: T-API

Recently, the OpenCV foundation announced the availability of OpenCV 3.0, which brings significant performance improvements for Intel SoCs. Again, refer to the OpenCV changelog:

  • Performance of OpenCL-accelerated code on Intel Iris Graphics and Intel Iris Pro Graphics has been improved by 10%-230%
  • On x86 and x64 platforms OpenCV binaries include and use a subset of Intel® Integrated Performance Primitives (Intel® IPP) by default. OpenCV 3.0 beta includes a subset of Intel® IPP 8.2.1 with additional optimization for AVX2.
Intel INDE OpenCV is based exactly on the OpenCV 3.0 Beta community sources and contains a preview of even more Intel-specific optimizations and features (detailed below) that are not yet part of the public “stock” OpenCV version.

Notice that the official “Beta” status of the current OpenCV release implies that there might still be performance changes by the final OpenCV 3.0 (“Gold”) release.

Intel INDE OpenCV Performance

Methodology

This document relies on results from the OpenCV performance tests hosted on GitHub. These tests measure performance across multiple variables, including the OpenCV function, image size, border handling scheme, and function-specific parameters like filter size. In recent versions of the test suite, this adds up to almost 3,000 distinct tests covering the optimized functions. To make sense of these tests, this document reports speedup numbers as the geometric mean across the various tests that cover an individual function.
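The geometric-mean aggregation described above can be sketched in plain Java. This is not the actual test harness, and the speedup values are hypothetical, purely for illustration:

```java
import java.util.List;

public class GeoMean {
    // Geometric mean of per-test speedup ratios: exp(mean(ln x)).
    // Summing logs avoids overflow when many ratios are multiplied.
    static double geometricMean(List<Double> speedups) {
        double logSum = 0.0;
        for (double s : speedups) {
            logSum += Math.log(s);
        }
        return Math.exp(logSum / speedups.size());
    }

    public static void main(String[] args) {
        // Hypothetical speedups of one OpenCV function across its tests:
        // 2x, 8x, and no change. The geometric mean is cbrt(16), about 2.52,
        // which damps the influence of the single 8x outlier.
        System.out.println(geometricMean(List.of(2.0, 8.0, 1.0)));
    }
}
```

Unlike an arithmetic mean (about 3.67x for the same inputs), the geometric mean treats an n-fold gain and an n-fold loss symmetrically, which is why it is the usual choice for ratio-valued benchmark summaries.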

Performance Gains from Direct OpenCV 3.0 Optimizations

Figure 1 shows the example performance gains achieved via the OpenCV 3.0 optimizations made by Itseez. This chart shows the geometric mean across different tests of an individual function. The results measure the impact of a specific changelist by comparing against the immediate predecessor.


Figure 1. Performance gains from OpenCL optimizations for OpenCV 3.0, on Intel HD Graphics OpenCL device.
Bars are tests geomeans for each individual OpenCV function. Refer to the IDF’14 presentation “Intel® Processor Graphics: Optimizing Computer Vision and More” for details on the applied optimizations.

These results show substantial performance gains on Intel Processor Graphics. Similarly, the IPP-enabled path was significantly improved in OpenCV 3.0 (through the free Intel IPP subset available to OpenCV users), so today it offers significant acceleration on Intel CPUs compared to the default C code. As Intel INDE OpenCV is based exactly on the community OpenCV 3.0 sources, all the optimizations discussed so far are also included in Intel INDE OpenCV.

 

Default OpenCV 3.0 Logic behind Performance Code Paths

The community OpenCV 3.0 is equipped with two major performance paths beyond the “plain” fallback in C/C++:

  • OpenCL flavor of OpenCV functions, running on the Intel Processor Graphics OpenCL device.
  • Intel IPP-enabled path, running on the CPU.

The community OpenCV makes very coarse decisions about which function to run on which particular piece of hardware. For example, with the original OpenCV 3.0 Beta, if you use the UMat data type and call an OpenCV function that has an OpenCL implementation, the function runs on the GPU; otherwise, it runs on the CPU. This approach does not always result in the best performance. Intel INDE OpenCV features a dispatcher that solves this issue.

Overriding the Default Logic in Intel INDE OpenCV by Dispatcher

CPUs and GPUs have significantly different architectures that make them better suited to different tasks. A CPU is often superior for complex processing on a single stream (or a few streams) of data, while GPUs perform much better on data-parallel and computationally heavy tasks. Many OpenCV functions actually still run faster on modern CPUs due to the nature of the algorithm, especially when backed by an optimized, multi-threaded implementation. Just like community OpenCV, Intel INDE OpenCV provides an Intel IPP-enabled code path for Intel CPUs.

Moreover, Intel INDE OpenCV features a dispatcher API that lets you specify which code path (for example, OpenCL or IPP) to use in each particular case. Refer to the separate dispatcher article for details.

Performance Analysis of Complex Pipelines

Most of the analysis described in this document focuses on the performance of individual OpenCV functions. For information on analyzing complex pipelines that use OpenCV on Intel SoCs, refer to the Intel INDE OpenCV Profiler tutorial.

References for Further Reading

Quick Installation of Intel® INDE OpenCV


Introduction

The Intel® Integrated Native Developer Experience (Intel® INDE) is a cross-architecture productivity suite that provides developers with tools, support, and IDE integration to create high-performance C++/Java* applications for Windows* on Intel® architecture and Android* on ARM* and Intel® architecture.

The new OpenCV beta, a feature of Intel INDE, is compatible with the new open source OpenCV 3.0 beta (Open Source Computer Vision Library: http://opencv.org). Intel INDE OpenCV provides free binaries for computer vision applications development and production for usages like enhanced photography, augmented reality, video summarization, and more.

Key features of the Intel® INDE OpenCV are

  • Compatibility with OpenCV 3.0
  • Pre-built and validated binaries, cleared of IP-protected building blocks.
  • Easy to use and maintain, with IDE integration for both Windows and Android development.
  • Optimized for Intel® platforms with heterogeneous computing.

Download the Intel® INDE Installation Package

Intel® INDE provides a comprehensive tool set for developing applications targeting both Windows* and Android* operating systems. The first step to get started with Intel INDE is to go to the Intel® INDE Web page and download the edition that you want to use.

Intel INDE main page - Download the packages

Unless you intend to install multiple Intel INDE components, select the Online Installer at the Intel INDE downloads page to avoid downloading the components you don’t need.

Choose download option that suits your needs

Select an IDE for Android* Development

Run the installer and click Next on the Welcome page.

In the Wizard step, select the IDE you want to equip with the Intel INDE Getting Started tools for Android* development.

Selecting IDEs for OpenCV integration

Intel INDE installs Android Studio* or Eclipse* if selected at this step. If you want to use the Microsoft Visual Studio* IDE for Android* application development, acquire and set it up prior to installing Intel INDE.

  • NOTE: IDE selection is relevant for Intel INDE Android* development only. Similarly, for the Intel INDE OpenCV component, the choice indicates which IDE the installer should equip with the Intel INDE OpenCV Android* development plugins. Notice that in the current release Visual Studio* is not supported for Android development with Intel INDE OpenCV.
  • NOTE: If you plan to develop only regular Windows* applications using Intel INDE capabilities, select “Skip integration of Android* Getting Started tools”.

Refer to the Getting Started with Intel INDE OpenCV development for Windows* and Getting Started with Intel INDE OpenCV development for Android* documents for details.

Select Components to Install

Select the OpenCV component in the Build category of the Intel INDE component selection screen. You can also select individual Intel INDE OpenCV sub-components to install.

Selecting the Intel INDE components to install

The following are the subcomponents of Intel INDE OpenCV:

  • Visual Studio Support (for Windows* Targets)
    • Installs Intel INDE OpenCV binaries for Windows* OS, built against a particular Visual Studio* runtime. The installation is version-specific, which is needed to avoid potential collisions between Visual Studio* runtimes.
    • For the Visual Studio* versions available on your machine, the corresponding Intel INDE OpenCV binaries are marked for installation by default.
  • ImageWatch plugin for Visual Studio*
  • Android* Support
    • Installs Intel INDE OpenCV binaries for Android* OS. The component is automatically marked for installation if you select any IDE for Android* development in the Suite screen.
  • Java and Python language support (bindings for Intel INDE OpenCV).

You are also welcome to select any additional Intel INDE components (beyond OpenCV) that you might need.

Click Next to continue with the installation.

Intel® INDE OpenCV License

At this step you should read through the license agreements and, if you agree with the terms, accept all licenses for all the Intel INDE components you selected. For your convenience, the licenses are listed in the very last wizard screen, where you can scroll through the list of individual licenses (and match them to the components).

Just like the original open source edition of OpenCV, Intel INDE OpenCV comes under the same regular 3-clause BSD license:

Read and accept Intel INDE license agreements

Select I accept all of these licenses if you agree, and click Next.

Once the installation finishes, you are ready to start developing your OpenCV code!

Also See

For more help on using Intel INDE OpenCV with the particular platform, refer to these guides:

Getting Started with Intel® INDE OpenCV for Android* Targets


About Intel INDE OpenCV

The Intel® Integrated Native Developer Experience (Intel® INDE) is a cross-architecture productivity suite that provides developers with tools, support, and IDE integration to create high-performance C++/Java* applications for Windows* on Intel® architecture and Android* on ARM* and Intel® architecture.

The new OpenCV beta, a feature of Intel INDE, is compatible with the new open source OpenCV 3.0 beta (Open Source Computer Vision Library: http://opencv.org). OpenCV beta provides free binaries for computer vision applications development and production for usages like enhanced photography, augmented reality, video summarization, and more.

Key features of the Intel® INDE OpenCV are:

  • Compatibility with OpenCV 3.0
  • Pre-built and validated binaries, cleared of IP-protected building blocks.
  • Easy to use and maintain, with IDE integration for both Windows and Android development.
  • Optimized for Intel® platforms with heterogeneous computing.

This document focuses on creating OpenCV-enabled applications for Android*. If the target operating system of your application is Windows*, refer to Getting Started with Intel® INDE OpenCV for Windows*.

Installation Guide for Intel INDE OpenCV for Android* Targets

Refer to the Quick Installation Guide for Intel® INDE OpenCV for installation details.

Intel INDE OpenCV Android* Support Is BETA

The community version of OpenCV 3.0 Beta does not offer Android support. Intel INDE OpenCV provides preview Android 32-bit binaries. At the time of this release, the community OpenCV 3.0 API is not finalized (still Beta). Similarly, for the Intel INDE OpenCV binaries for Android* targets, the APIs are not final, and you can expect minor API changes over time.

Some OpenCV 3.0 Features Are Limited to JNI on Android*

Notice that for the Beta release, some important OpenCV 3.0 features on Android* are limited to JNI only. JNI stands for Java Native Interface, an application development approach in which C/C++ code communicates with the rest of the Java application via JNI.

For example, UMat support is available only in the native C++ code, and not available through the OpenCV Java API. For details refer to the OpenCV 3.0 Architecture Guide for Intel INDE OpenCV.

Still, the Intel INDE OpenCV preview binaries for Android* enable you to start developing and testing applications for Android* targets.

Components of the Intel INDE OpenCV for Android Targets

Intel INDE OpenCV contains the following components for Android* targets:

  • Ready-to-use binaries for Android* application development (x86)
  • Intel INDE OpenCV version of the OpenCV4Android
  • Integration into Eclipse* and Android Studio* IDEs

Intel INDE OpenCV Android*-related files structure is as follows:

<Intel-OpenCV root dir> (e.g. C:\Intel\INDE\OpenCV)
|_sdk             - INDE OpenCV “root” folder for Android (x86)
 |_aar            - binary distribution of the INDE OpenCV library (for Android Studio*)
 |_etc            - classifiers data for object detection functions (in xml)
 |_java           - root folder for INDE OpenCV Android (Java)   
   |_3rdparty     - 3rd party components libs (like libtiff, libjpeg, etc)
   |_libs         - static (“.a”) and dynamic (“.so”) libraries INDE OpenCV
   |_res          - resource files (strings, icons, etc) for INDE OpenCV
   |_src          - INDE OpenCV Java and application helper classes   
 |_native         - root folder for Native INDE OpenCV for Android (for C/C++ dev)  
   |_3rdparty     - 3rd party components (like libtiff, libjpeg, etc)
   |_jni          - Native Interface for INDE OpenCV
     |_include    - header files for INDE OpenCV
   |_libs         - static (“.a”) and dynamic (“.so”) libraries INDE OpenCV

Intel INDE OpenCV: Alternative to the Android* OpenCV Manager

Android* OpenCV Manager (http://docs.opencv.org/platforms/android/service/doc/index.html) is a service that manages OpenCV library binaries on the end-user devices. The service uses a mechanism of constants (tags) to differentiate OpenCV versions, which is explained in the OpenCV docs as well.

You can switch your application to use the Intel INDE OpenCV version. Intel INDE OpenCV binaries are not available via the Android OpenCV Manager. To enable Intel INDE OpenCV binary support in your application, pass a specific tag (see the code example below) to the regular initAsync() call. Otherwise, the application uses the community OpenCV libraries by default.

This chapter is relevant to existing OpenCV-enabled applications that use the Android OpenCV Manager via the initAsync() method of the Java OpenCV Loader. With Intel INDE OpenCV, you should use this function even though the binaries are actually loaded from the application’s *.apk file. Applications created with the Intel INDE OpenCV Android Studio* and Eclipse* wizards use this method by default.

Notice that Intel INDE OpenCV binaries must be explicitly packaged into your application’s resulting *.apk. The ways to package the binaries are described in the IDE-specific chapters of this article.

The general machinery of the actual binary loading and the associated callbacks is explained in http://docs.opencv.org/platforms/android/service/doc/index.html. The only Intel INDE OpenCV specific is a dedicated versioning constant. Consider the following code example (error handling is omitted for clarity):

public void onResume()
{
    super.onResume();
    // load INDE OpenCV binaries right from the apk using the specific tag
    if (!OpenCVLoader.initAsync(
            OpenCVLoader.OPENCV_INTEL_INDE_VERSION_3_0_0_PREVIEW,
            this, mLoaderCallback))
    {
        // if failed, report error and exit
    }
}

Applications created with the Intel INDE OpenCV wizards for Android Studio* and Eclipse* use the correct tag by default.

Creating OpenCV-Enabled Applications Using Intel INDE OpenCV IDE Wizards

Intel INDE is a turnkey development suite for Android. Upon Intel INDE installation, Intel INDE OpenCV enables you to develop computer vision applications with Eclipse* and Android Studio* IDEs.

Refer to the Quick Installation Guide for Intel INDE OpenCV for installation details per IDE.

Android Studio* Support with Intel INDE OpenCV: Creating New Project

Intel INDE OpenCV provides Java and JNI project wizards for Android Studio*. JNI stands for Java Native Interface, an application development approach in which C/C++ code communicates with the rest of the Java application via JNI. With the JNI approach, your code uses Intel INDE OpenCV directly in C/C++, just like any other native library.

To create any type of application,

  1. Start with a regular Create New Project wizard (Ctrl-N).
  2. In the Configure your new project step, specify the Application name, project location, and so on.
  3. In the Select the form factors your app will run on step, select Phone and Tablet and specify API Level 19 to match the general Intel INDE requirements.

  4. Select either the (pure Java) Intel INDE OpenCV Project or the Intel INDE OpenCV JNI Project.

  5. Finally, specify the Activity Name, Layout Name, Title, and Main Resource Name for the new project activity.

  6. Click Finish and start working with your first Intel INDE OpenCV project in Android Studio*.

 

We generally recommend starting with JNI projects (the “Intel INDE OpenCV Android Project” wizard). To edit the JNI files, navigate directly to the “<your_project_dir>/jni” folder.

Android Studio Support with Intel INDE OpenCV: Enable an Existing Project

To enable Intel INDE OpenCV support for an existing project, patch the build.gradle file for each module where you use Intel INDE OpenCV:

Patch the build.gradle file

Depending on whether you are going to use Intel INDE OpenCV via its Java interface or via JNI support, slightly different patches should be applied to the build.gradle file for your module. Specifically, use different AAR files.

If your module uses the pure Java OpenCV interface and you don’t have any native OpenCV code built with the NDK, add the following snippet at the end of the module’s build.gradle file:

repositories {
    flatDir {
        dirs System.getenv("INDE_OPENCV_AAR_DIR")
    }
}
dependencies {
    compile(name: 'openCVLibrary300intelJava', ext: 'aar')
}

For a JNI project, re-create the activity with the help of the Intel INDE OpenCV JNI Project template instead of enabling an existing module with Intel INDE OpenCV support. If using the template is not acceptable, do the following:

  1. Patch the build.gradle file with the following code (no Java suffix in the name of AAR file):

    repositories {
        flatDir {
            dirs System.getenv("INDE_OPENCV_AAR_DIR")
        }
    }
    dependencies {
        compile(name: 'openCVLibrary300intel', ext: 'aar')
    }

  2. In the Android.mk file, add the following line right after the include $(CLEAR_VARS) line:

     include $(INDE_OPENCV_DIR)\sdk\native\jni\OpenCV.mk
  3. Make sure that APP_STL := gnustl_shared is used in the Application.mk file to be compatible with the Intel INDE OpenCV binaries. Using other types of the runtime library may lead to undefined behavior during application execution.
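A minimal Application.mk consistent with this compatibility note might look like the sketch below. Only the APP_STL line is mandated by the text; the ABI and platform values are assumptions based on the x86 binaries and API Level 19 mentioned elsewhere in this article:

```makefile
# Runtime must match the Intel INDE OpenCV binaries
APP_STL      := gnustl_shared
# Assumed target ABI and platform level (adjust to your project)
APP_ABI      := x86
APP_PLATFORM := android-19
```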

Eclipse* Support with Intel INDE OpenCV: Creating A New Project

Intel INDE OpenCV provides an Eclipse* project wizard for creating Android* applications. To create a new project with Intel INDE OpenCV support, select File > New > Project (Ctrl+N).

Select OpenCV Project from the list of projects, and enter a name and location for the new project on the next wizard page.

When you are done, click Finish.

Both the new project and the Intel INDE OpenCV library project are now imported and opened in Eclipse*, with the project dependencies and Java build paths resolved accordingly. You can start coding right away.

Project and OpenCV library are opened in Eclipse*

If you want to develop OpenCV-enabled applications using JNI, use Android Studio* instead (see the previous section).

Eclipse* Support with Intel INDE OpenCV: Enable an Existing Project

To enable Intel INDE OpenCV for an existing Android* (Java) project, right-click your project in the Package Explorer view in Eclipse* and select Enable OpenCV support:

Enable OpenCV support

Using this command, you import the Intel INDE OpenCV library into your workspace and set dependencies to the Intel INDE OpenCV for your project:

Set dependencies for your project

Note on Development for Android* Targets with Microsoft Visual Studio*

Note that the current release does not support Visual Studio for Android* development with Intel INDE OpenCV. Select another IDE, such as Android Studio or Eclipse, during installation. Refer to the Quick Installation of Intel® INDE-OpenCV for installation details.

Intel® INDE OpenCV - Release Notes


Introduction

The Intel® Integrated Native Developer Experience (Intel® INDE) is a cross-architecture productivity suite that provides developers with tools, support, and IDE integration to create high-performance C++/Java* applications for Windows* on Intel® architecture and Android* on ARM* and Intel® architecture.

The new OpenCV beta, a feature of Intel INDE, is compatible with the new open source OpenCV 3.0 beta (Open Source Computer Vision Library: http://opencv.org). It provides free binaries for computer vision application development and production, for usages like enhanced photography, augmented reality, video summarization, and more.

Key features of the Intel® INDE OpenCV are:

  • Compatibility with OpenCV 3.0
  • Pre-built and validated binaries, cleared of IP-protected building blocks
  • Easy to use and maintain, with IDE integration for both Windows and Android development
  • Optimized for Intel® platforms with heterogeneous computing

The official “Beta” status of the current Intel INDE OpenCV release implies that there might be API changes in the final OpenCV 3.0 (“Gold”) release. Therefore, whenever you start development with the Intel INDE OpenCV 3.0 Beta, you might need to change your code to make sure it works with “Gold” versions of the component.

Refer to the known issues section below for information on possible incompatibility.

The Intel INDE OpenCV Beta release focuses on simplifying the development process and shortening development time. You do not need to configure, build, or integrate the OpenCV libraries into your environment yourself. This enables you to start development immediately.

To learn more about this product, refer to the Getting started with Intel INDE OpenCV component guide.

This document provides system requirements, installation instructions, issues and limitations, and legal information.

For technical support, including answers to questions not addressed in the installed product, please visit the technical support forum.

System Requirements

There are no additional requirements to install and use this product on top of the existing Intel INDE System and Software requirements listed at https://software.intel.com/en-us/intel-inde-support.

Installation Notes

Installation on Microsoft Windows* OS

You can obtain and install the Intel INDE OpenCV library on Windows* host as part of the Intel INDE installation. For detailed instructions refer to the Quick Installation Guide for OpenCV with Intel® INDE.

Uninstalling Intel INDE OpenCV

To remove the Intel INDE OpenCV, uninstall it via the Intel INDE installer. For instructions, refer to the Intel INDE Release Notes and Installation Guide.

Getting Started with Android* Targets

The community version of OpenCV available today (3.0 Beta) does not offer Android support. Intel INDE OpenCV includes preview Android (32-bit only) binaries, which let you explore a limited set of OpenCV 3.0 features and capabilities on Android* targets. The beta feature set for Android is limited today but will grow with future releases. For more information, including the IDE integration process, refer to Getting Started with Intel INDE OpenCV for Android* Targets.

Getting Started for Windows* Targets

Intel INDE OpenCV provides ready-to-use binaries for Windows application development with the Microsoft Visual Studio* IDE, along with integration into Visual Studio itself. During the Intel INDE installation you can also select Microsoft's pre-release ImageWatch plug-in for Visual Studio; it appears under the OpenCV component within the Intel INDE installer. For more information, refer to Getting Started with Intel INDE-OpenCV for Windows* Targets.

Known Issues and Limitations

On Android* platforms, the default acceleration path in the community OpenCV 3.0 Beta is GPU OpenCL, and Intel INDE OpenCV inherits this behavior. However, a later revision of community OpenCV changed this by disabling the OpenCL code path for Android*. If that change becomes part of the community OpenCV 3.0 (“Gold”) release, your application might behave differently when switching between the community OpenCV and Intel INDE OpenCV implementations.

  • If installation of the Microsoft* ImageWatch plug-in for Visual Studio* via the Intel INDE installer fails, install the plug-in separately using the instructions at https://visualstudiogallery.msdn.microsoft.com/e682d542-7ef3-402c-b857-bbfba714f78d
  • For Android* application development, if your application uses the Android OpenCV Manager (http://docs.opencv.org/platforms/android/service/doc/index.html), you need to make your code use specific tags/versioning to load the Intel INDE OpenCV binaries instead. For details refer to Getting Started with Intel INDE-OpenCV for Android* Targets. Otherwise, the community OpenCV libraries are used by default.
  • Using the Intel INDE OpenCV product assumes that your application relies on the OpenCV 3.0 API. Even though an existing OpenCV 2.4.x-based application may happen to work with this product, such compatibility is not claimed and behavior is not guaranteed. Therefore, consider porting your application to the OpenCV 3.0 API first. For more details on the OpenCV 3.0 API, refer to the OpenCV 3.0 Architecture Guide for Intel INDE OpenCV.
  • The additional features added in this product are not part of the community version of OpenCV. In particular, usage of the new functions may result in incompatibility with other OpenCV 3.0 implementations.
  • As of this product's release, the community OpenCV 3.0 Beta does not offer binaries for ARM-based platforms, and neither does this product.
  • Visual Studio* is not supported for Android* development with Intel INDE OpenCV.
  • On Android*, UMat support is limited to JNI only. (JNI, the Java Native Interface, is an application development approach in which C/C++ code communicates with the rest of the Java application.)
  • The fix implemented in this product removes a data race between oclCleanupCallback and Mat::GetUMat that is present in the community OpenCV. However, this might have a negative performance impact, depending on the application and system used.
  • On Windows*, Timeout Detection and Recovery (TDR) events may be observed when OpenCL™ execution is involved, especially on workloads with complex, time-consuming kernels. Increase the TDR delay to avoid the TDRs. For details, refer to the article at http://msdn.microsoft.com/en-us/library/windows/hardware/gg487368.aspx
  • For details on known issues with the OpenCL™ standard on the Intel Processor Graphics, refer to the relevant driver release notes.
  • The product supports Intel® Threading Building Blocks (Intel® TBB) 4.3.4 (4.3 update 4). Any standalone Intel TBB package loaded by the application should be of either the same or higher version.
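
For the TDR workaround mentioned above, the delay can be raised through the TdrDelay registry value documented by Microsoft. As a sketch, the following .reg fragment sets a 10-second delay (adjust the value to your workload; a reboot is required for the change to take effect):

```reg
Windows Registry Editor Version 5.00

; Raise the GPU Timeout Detection and Recovery delay to 10 seconds (0x0a)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000000a
```

Editing this key affects display-driver recovery system-wide, so prefer a development machine and restore the default once profiling of long-running kernels is done.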