
OpenCV* 3.0 Architecture Guide for Intel® INDE OpenCV


Introduction

The Intel® Integrated Native Developer Experience (Intel® INDE) is a cross-architecture productivity suite that provides developers with tools, support, and IDE integration to create high-performance C++/Java* applications for Windows* on Intel® architecture and Android* on ARM* and Intel® architecture.

The new OpenCV beta, a feature of Intel INDE, is compatible with the new open source OpenCV 3.0 beta (Open Source Computer Vision Library: http://opencv.org). It provides free binaries for developing and shipping computer vision applications for usages like enhanced photography, augmented reality, video summarization, and more.

Key features of the Intel® INDE OpenCV are:

  • Compatibility with OpenCV 3.0.
  • Pre-built and validated binaries, cleared of IP-protected building blocks.
  • Easy to use and maintain, with IDE integration for both Windows and Android development.
  • Optimized for Intel® platforms with heterogeneous computing.

This document is focused on the general OpenCV 3.0 Architecture. For the full list of Intel INDE OpenCV features, refer to the separate article.

Introducing OpenCV 3.0

OpenCV 3.0 is a new iteration of the now de facto standard library for vision and image processing. Starting with its Alpha version, it introduces important changes to the OpenCV architecture. Directly from the changelog:

  • The new technology is nick-named "Transparent API" and, in brief, is an extension of classical OpenCV functions, such as cv::resize(), to use OpenCL underneath. See more details here: T-API

Recently, the OpenCV foundation announced the availability of OpenCV 3.0 Beta. The Beta version brings significant performance improvements for Intel SoCs, as discussed in the “Performance Promise of OpenCV 3.0 and INDE-OpenCV” document. Intel INDE OpenCV is based directly on the OpenCV 3.0 sources and contains a preview of features that are not yet part of public OpenCV.

The official “Beta” status of the current Intel INDE OpenCV release implies that there may be API changes in the final OpenCV 3.0 (“Gold”) release. Therefore, if you start development with the Intel INDE OpenCV 3.0 Beta, you might need to update your code to make sure it works with the “Gold” versions of the components.

OpenCV 3.0 Transparent API

New Architecture for Using OpenCL™ with OpenCV

“Transparent API” is a new architecture for using the OpenCL technology with OpenCV 3.0. OpenCV 3.0 resolves a number of issues with the previous usage of OpenCL. With OpenCV 3.0 you can write a single code path that works regardless of the OpenCL presence in the system.

A few design choices support the new architecture:

  1. A unified abstraction, cv::UMat, that enables the same APIs to be implemented using CPU or OpenCL code, without requiring an explicit call to an OpenCL-accelerated version. These functions use an OpenCL-enabled GPU if one exists in the system, and automatically fall back to CPU operation otherwise.
  2. The UMat abstraction enables functions to be called asynchronously. Unlike cv::Mat in OpenCV 2.x, access to the underlying data of a cv::UMat is performed through a method of the class, not through its data member. This approach enables the implementation to wait for GPU completion only when the CPU code actually needs the result.
  3. The UMat implementation makes use of CPU-GPU shared physical memory available on Intel SoCs, including allocations that come from pointers passed into OpenCV.

OpenCV 3.0: Transparent Coding Style with UMat

The code samples below illustrate the impact of the Transparent API. The first snippet shows how to process images using the CPU in OpenCV 2.x:

cv::Mat inMat, outMat;
vidInput >> inMat;
cv::cvtColor(inMat, outMat, cv::COLOR_RGB2GRAY);
vidOutput << outMat;

Figure 1. Color conversion using the CPU in OpenCV 2.x

This code reads an image from a capture device (camera or video decoder), converts it to grayscale, and writes the result. In OpenCV 2.x, if you want to do the color conversion on an OpenCL-enabled GPU, the code looks like this:

cv::Mat inMat, outMat;
vidInput >> inMat;
cv::ocl::oclMat inOclMat(inMat);
cv::ocl::oclMat outOclMat;
cv::ocl::cvtColor(inOclMat, outOclMat, cv::COLOR_RGB2GRAY);
outMat = outOclMat;
vidOutput << outMat;

Figure 2. Color conversion using OpenCL in OpenCV 2.x

Notice the very explicit usage of OpenCL in this version of the code. Running this code on a platform that doesn’t support OpenCL will fail, so you have to duplicate code paths if you can’t guarantee OpenCL availability on every target platform.

In OpenCV 3.0, you can write a single code path that works regardless of the OpenCL presence on the target platform. Consider the following example:

cv::UMat inMat, outMat;
vidInput >> inMat;
cv::cvtColor(inMat, outMat, cv::COLOR_RGB2GRAY);
vidOutput << outMat;

Figure 3. Color conversion using either the CPU or OpenCL in OpenCV 3.x

This code looks very similar to the CPU version written for OpenCV 2.x. The only difference is the use of the cv::UMat class instead of cv::Mat, which enables cvtColor to use the OpenCL implementation of the color conversion.

UMat Is an Opaque Data Type

Generally, cv::UMat is a C++ class that is very similar to cv::Mat, but the actual UMat data can be located in regular system memory, dedicated video memory, or shared memory. The UMat is “opaque” in the sense that it can hold multiple internal representations of the data. Thus, the UMat class neither exposes a “data” pointer (unlike cv::Mat) nor provides direct element access. Instead, cv::UMat provides methods to map the UMat data into user space and “unmap” it, letting the UMat synchronize its internal data structures (see below).

UMat and Mat Compatibility

Any OpenCV function that accepts Input/OutputArray(s) also accepts UMat, even if the function has no actual (GPU) OpenCL implementation (in this case an implicit data copy can happen).

If you still need “Mat”s for your legacy code, use UMat::getMat(int access_flags), which maps and unmaps the data for the specified CPU access as needed. The getMat method creates a cv::Mat object that locks the UMat data. The “parent” UMat data cannot be used until the “child” Mat object is destroyed.

Similarly, Mat::getUMat() enables you to use the results of your legacy (Mat-based) code with code based on the new (“Transparent”, UMat-based) API. Refer to the next section for related performance tips. Usage of the “parent” Mat after a getUMat() call is undefined until the resulting “child” UMat object is destroyed.

Notice that mixed usage of Mat and UMat is generally discouraged!

Finally, you can also use explicit copy methods:

  • void Mat::copyTo(OutputArray dst);
  • void UMat::copyTo(OutputArray dst);

Porting to OpenCV 3.0: Rule of Thumb

Generally, porting OpenCV 2.x code to OpenCV 3.0 and the Transparent API is trivial: replace Mats with UMats on the performance-sensitive paths of your code. Since UMat has neither public data members nor element-access operations, all errors pop up immediately at compile time.

Best Known Methods for OpenCV 3.0 Users

Use the cv::UMat Datatype, But Not Everywhere

The key to getting OpenCV functions to use OpenCL in OpenCV 3.0 is to use the cv::UMat data structure for images; otherwise, in the community OpenCV 3.0, the functions refuse to offload computation. With Intel INDE OpenCV you can hint the Dispatcher to pick the OpenCL code path even for Mat-based code, but using UMat reduces the number of potential behind-the-scenes copy operations. For details, see the separate Dispatcher tutorial.

But this doesn’t mean it is advantageous for performance to use cv::UMat everywhere. The guidance here is to use cv::UMat for images, and continue to use cv::Mat for other smaller data structures such as convolution matrices, transpose matrices, and so on.

Prefer the BORDER_REPLICATE Mode when Processing Borders

Border processing always carries a performance cost, as it requires additional logic in the code. The performance impact of the BORDER_REPLICATE mode is smaller than that of other border modes, because in this mode the image coordinates can be clamped with the OpenCL min() and max() functions, without resorting to more general conditional statements.
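The clamping trick can be illustrated with ordinary min()/max() calls. This is a plain-Java sketch of the idea only; the real implementation lives in OpenCV's OpenCL kernels:

```java
class BorderClamp {
    // BORDER_REPLICATE: an out-of-range coordinate is clamped to the
    // nearest edge pixel, which needs only min()/max(), no branches.
    static int replicate(int x, int size) {
        return Math.min(Math.max(x, 0), size - 1);
    }
}
```

For a 10-pixel-wide row, replicate(-3, 10) yields 0 and replicate(12, 10) yields 9; the border pixels are simply repeated.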

Use Mat::getUMat() and UMat::getMat() Carefully

The Mat::getUMat() function acquires a UMat alias of an existing Mat data structure. Under the covers, it may create an OpenCL buffer handle (using the CL_MEM_USE_HOST_PTR flag). From that point, it is important to forego usage of the original Mat until the new UMat has been properly destroyed; otherwise, the CPU and OpenCL versions of the data could get out of sync. Similarly, the UMat::getMat() function maps the OpenCL buffer inside the UMat, so programs should not use the UMat object until the Mat returned from getMat() has been destroyed. A way to manage this is with a pattern like the one shown below:

cv::UMat uTest(height, width, CV_32FC1);
uTest.setTo(0);
 
{
    // Use getMat to verify that the matrix has zeroes. 
    // This locks uTest data until mTest is destroyed
    cv::Mat mTest = uTest.getMat(cv::ACCESS_READ);
 
    // CPU code here
       ...
} // mTest.release() will be called here automatically.
  // This will unlock original uTest.

Figure 4. Preferred getMat/getUMat usage pattern

Intel® INDE Getting Started Guide


Introduction

Intel® Integrated Native Developer Experience (Intel® INDE) is a one-stop productivity suite to create native applications targeting the Android*, Microsoft* Windows*, and OS X* platforms. This guide will help you get started with developing high-performance native applications using the various features of Intel INDE.

Developing Windows* Applications

You can use Intel INDE to develop Windows* applications using a Microsoft Windows* host system. For more help on using Intel INDE with this platform, refer to this guide:

Developing Android* Applications

You can use Intel INDE to develop Android* applications for mobile platforms using either a Microsoft Windows* host system or an Apple OS X* host system. As the tools available on each development platform are different, refer to the Getting Started Guide for your development system:

Developing OS X* Applications

You can use Intel INDE to develop OS X* applications for mobile platforms using an Apple OS X* host system. For more information, refer to this guide:

Next Steps...

Start developing apps with Intel INDE! See the Intel INDE Home Page for information about the different versions of the product, how to purchase it, and how to get support for the product.

 

5 Ways to Optimize Your Code for Android 5.0 Lollipop



Introduction

With the release of Android 5.0 Lollipop*, an innovative default runtime environment was introduced, called ART* (short for Android RunTime). It includes a number of enhancements that improve performance. In this paper, we introduce some of the new features in ART, benchmark it against the previous Android Dalvik* runtime, and share five tips for developers that can further improve application performance.

What’s new in ART?

Profiling many Android applications on the Dalvik runtime identified two key pain points for end users: the time it takes to launch an app, and the amount of jank. Jank occurs when an application stutters, judders, or simply halts because it can't keep up with the screen refresh rate, and is the result of frame setup taking too long. A frame is defined as janky when it is much faster or slower than the previous frame. Users see jank as jerky motion, which makes the experience less fluid than users and developers would wish for. To address these issues, there are several new features in ART:

  • Ahead-of-time compilation: At install time, ART compiles apps using the on-device dex2oat tool and generates a compiled executable for the target device. By comparison, Dalvik used an interpreter plus just-in-time compilation: it converted an APK into optimized dex bytecode at installation time, and further compiled the optimized dex bytecode into native machine code for hot paths while the application ran. The result is that applications launch faster under ART, although the price is that they take longer to install. Applications also use more flash storage on the device under ART, because the code compiled at install time takes up extra space.
  • Improved memory allocation: Applications that need to allocate memory intensively might have experienced sluggish performance on Dalvik. A separate large object space and improvements in the memory allocator help to alleviate this.
  • Improved garbage collection: ART has faster and more parallel garbage collection, resulting in less fragmentation and better use of memory.
  • Improved JNI performance: Optimized JNI invoke and return code sequences reduce the number of instructions used to make JNI calls.
  • 64-bit support: ART makes good use of 64 bit architectures, improving the performance of many applications when run on 64-bit hardware.

Together, these features improve the user experience of applications written using the Android SDK alone, as well as applications that make lots of JNI calls. Users may also benefit from longer battery life because applications compile only once and execute faster, and consume less power during routine use as a result.

Comparing performance in ART and Dalvik

When ART was first released as a preview on Android KitKat 4.4, there was some criticism of its performance. That wasn’t a fair comparison because an early preview version of ART was being compared to the fully matured and optimized Dalvik, with the result that some applications ran slower under ART than under Dalvik.

We now have an opportunity to compare the consumer-ready version of ART against Dalvik. Because ART is the only runtime in Android 5.0, a side-by-side comparison of Dalvik and ART is only possible on devices that have recently been updated from Android KitKat 4.4 to Android Lollipop 5.0. For this paper, we conducted tests using the TrekStor SurfTab xintron i7.0* tablet with an Intel® Atom™ processor, initially with Android 4.4.4 running Dalvik, and then updated to Android 5.0 running ART.

Since we are comparing different versions of Android, it is possible that some of the improvements we see come from Android 5.0 changes other than ART, but based on our internal performance analysis we found that ART is the cause of most of the improvements.

We ran benchmarks where Dalvik’s ability to aggressively optimize code that is repeatedly executed might be expected to give it an advantage, as well as Intel’s own gaming simulation.

Our data shows that ART outperforms Dalvik on the five benchmarks we tested, in some cases significantly.

Relative Lollipop-ART to KitKat-Dalvik performance

For more information on these benchmarks, see these links:

IcyRocks version 1.0 is a workload developed by Intel to mimic real-world gaming applications. It uses the open source Cocos2d* library along with JBox2D* (a Java physics engine) for most of its computations. It measures the average number of animation frames it can render per second (FPS) at various load levels, and then computes the final metric as the geometric mean of the FPS at those load levels. It also measures the degree of jank (jank per second): the mean number of janky frames per second across the load levels. It shows improved performance in ART compared to Dalvik:

Relative IcyRocks animations/second
IcyRocks version 1.0 also shows that ART renders frames more consistently than Dalvik, with less jank and thus a smoother user experience.

Relative IcyRocks jank/second
Based on this performance evaluation, it is clear that ART is already delivering a better user experience and better performance than Dalvik.
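The IcyRocks scoring described above can be sketched as follows. This is a simplified, hypothetical illustration of the two metrics (geometric-mean FPS, and a janky-frame count under an assumed tolerance rule), not the benchmark's actual code:

```java
class IcyRocksScore {
    // Final FPS metric: geometric mean of the average FPS measured
    // at each load level, as described in the text.
    static double geometricMeanFps(double[] fpsPerLevel) {
        double logSum = 0.0;
        for (double fps : fpsPerLevel) {
            logSum += Math.log(fps);
        }
        return Math.exp(logSum / fpsPerLevel.length);
    }

    // Count frames whose duration deviates from the previous frame's
    // duration by more than `tolerance` (a fraction, e.g. 0.5 = 50%).
    // The exact jank threshold used by IcyRocks is an assumption here.
    static int countJankyFrames(double[] frameTimesMs, double tolerance) {
        int janky = 0;
        for (int i = 1; i < frameTimesMs.length; i++) {
            double prev = frameTimesMs[i - 1];
            if (Math.abs(frameTimesMs[i] - prev) > tolerance * prev) {
                janky++;
            }
        }
        return janky;
    }
}
```

For example, FPS measurements of 30 and 60 at two load levels combine to a geometric-mean score of about 42.4, rather than the arithmetic mean of 45; the geometric mean penalizes a runtime that collapses at high load.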

Moving code from Dalvik to ART

The transition from Dalvik to ART is transparent, and most applications that run on Dalvik should run on ART without requiring modification. As a result, many applications will see a performance improvement when users upgrade to the new runtime. It’s still a good idea to test your application with ART, especially if it uses the Java Native Interface: ART’s JNI error handling is stricter than Dalvik’s, as explained in this article.

Five tips for optimizing your code

Most applications will experience a performance increase as a result of the improvements in ART detailed above. Additionally, there are several practices you can adopt that may help to optimize your application further for ART. For each technique below, we’ve provided some simplified code to illustrate how it works.

Because all applications differ and the resulting performance depends so much on the surrounding code and context, it’s not possible to provide an indication of the performance increase you can expect. However, we will explain why these techniques increase performance, and we recommend that you test them in the context of your own code to see how they affect your performance.

The tips we provide here are broadly applicable, but in the case of ART, the dex2oat compiler that generates binary executable code from a dex file will implement these optimizations.
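When testing these tips in your own code, as recommended above, a minimal timing harness is often enough to compare two variants of the same code path. The sketch below is illustrative only; reliable measurements need warm-up iterations and repeated runs:

```java
class MicroTimer {
    // Run `r` the given number of times and return the elapsed
    // wall-clock time in nanoseconds. Illustrative only: no warm-up,
    // single measurement, no statistical treatment.
    static long timeNanos(Runnable r, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            r.run();
        }
        return System.nanoTime() - start;
    }
}
```

Run both the original and the modified method through the same harness, with the same iteration count, and compare the elapsed times.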

Tip #1 – Use local variables instead of public class fields when possible.

By limiting the scope of variables, you can not only make your code more readable and less error-prone, but also more optimization-friendly.

In the unoptimized code below, the value of v must be computed when the application runs. Because v is accessible from outside the method and can be changed by any code, its value is not known at compilation time; in particular, the compiler cannot tell whether the some_global_call() operation changes v.

In the optimized code, v is a local variable and its value can be calculated at compilation time. As a result, the compiler can put the result directly into the code and avoid the calculation at runtime.

Unoptimized code

class A {
  public int v = 0;

  public int m(){
    v = 42;
    some_global_call();
    return v*3;
  }
}

Optimized code

class A {
  public int m(){
    int v = 42;
    some_global_call();
    return v*3;
  }
}

Tip #2 – Use the final keyword to hint that a value is constant

The final keyword can be used to protect your code from accidentally modifying variables that should be constant, but can also improve performance by giving the compiler a hint that a value is constant.

In the unoptimized code below, the value of v*v*v must be calculated at runtime, because the value of v could change. In the optimized code, using the keyword final when assigning a value to v tells the compiler that this value won’t change, so the calculation can be performed during compilation and the result can be added into the code, removing the need to calculate it at runtime.

Unoptimized code

class A {
  int v = 42;

  public int m(){
    return v*v*v;
  }
}

Optimized code

class A {
  final int v = 42;

  public int m(){
    return v*v*v;
  }
}

Tip #3 – Use the final keyword for class and method definitions

Because all methods in Java are potentially polymorphic, declaring a method or class as final tells the compiler that the method is not redefined in any subclass.

In the unoptimized code below, m() must be resolved before making the call.

In the optimized code, because the method m() was declared as final, the compiler knows which version of m() will be called. As a result, it can avoid method look-up and inline the call, replacing the call to m() with the contents of its method. This results in a performance increase.

Unoptimized code

class A {
  public int m(){
    return 42;
  }
  public int f(){
    int sum = 0;
    for (int i = 0; i < 1000; i++)
      sum += m(); // m must be resolved before making a call
    return sum;
  }
}

Optimized code

class A {
  public final int m(){
    return 42;
  }
  public int f(){
    int sum = 0;
    for (int i = 0; i < 1000; i++)
      sum += m();
    return sum;
  }
}

Tip #4 – Avoid JNI calls for small methods.

There are good reasons to use JNI calls, such as when you have a C/C++ codebase or library to reuse, you need a cross-platform implementation, or you need increased performance. But it’s important to minimize the number of JNI calls, because each one carries a significant overhead. When JNI calls are used to optimize performance, this overhead can result in not realizing the expected benefits. In particular, frequently calling short JNI methods can be counter-productive, and putting JNI calls in a loop can amplify the overhead.

Code example

class A {
  public final int factorial(int x){
    int f = 1;
    for (int i = 2; i <= x; i++)
      f *= i;
    return f;
  }
  public int compute (){
    int sum = 0;
    for (int i = 0; i < 1000; i++)
      sum += factorial(i % 5);
// if we used the JNI version of factorial() here
// it would be noticeably slower, because it is in a loop
// and the loop amplifies the overhead of the JNI call
    return sum;
  }
}

Tip #5 – Use standard libraries instead of implementing the same functionality in your own code

Standard Java libraries are highly optimized and often use internal Java mechanisms to get the best possible performance. They can be significantly faster than the same functionality implemented in your own application code, so attempts to avoid the overhead of calling a standard library may actually result in lower performance. In the unoptimized code below, there is custom code that avoids calling Math.abs(). However, the code that uses Math.abs() works faster, because Math.abs() is replaced by an optimized internal implementation in ART at compile time.

Unoptimized code

class A {
  public static final int abs(int a){
    int b;
    if (a < 0)
      b = -a;
    else
      b = a;
    return b;
  }
}

Optimized code

class A {
  public static final int abs (int a){
    return Math.abs(a);
  }
}

Intel optimizations in ART

Intel worked with OEMs to provide an optimized version of Dalvik with better performance on Intel processors. Intel is making the same investment in ART, so performance will further increase on the new runtime. Optimizations will be made available through the Android Open Source Project (AOSP) and/or directly through device manufacturers. As before, the optimizations will be transparent to developers and users, so there will be no need to update applications to benefit.

Find out more

To find out more about optimizing your Android applications for Intel processors, and to discover Intel® compilers, visit the Intel Developer Zone at https://software.intel.com.

About the Author

Anil Kumar has been at Intel Corporation for more than 15 years, playing various roles in the Software and Services Group. He is currently a Sr. Staff S/W Performance Architect and plays an active role in the Java ecosystem by contributing to standards organizations and several benchmarks (SPECjbb*, SPECjvm2008, SPECjEnterprise2010, etc.), and by improving customer applications through better user experience, better resource utilization, and better default performance for h/w and s/w configurations.

Daniil Sokolov is a senior software engineer in the Intel Software and Services Group. Daniil has focused on various aspects of Java performance for the last 7 years. He currently works on improving User Experience and Java Application performance on Intel Android devices.

Xavier Hallade is Developer Evangelist at Intel Software and Services Group in Paris, France, where he works on a wide range of Android frameworks, libraries and applications, helping developers to improve their support for new hardware and technologies.
He's also a Google Developer Expert in Android, with a focus on the Android NDK and Android TV.

Correlate Android Logcat Messages with VTune Amplifier Timeline


Introduction

Android logcat is a very powerful tool for debugging. With logcat, we can see lots of useful information from the system or from applications over time. Android provides a standardized API for logging: in our development, we can easily add logs and use logcat to view the logs from our program. VTune™ Amplifier for Systems is a profiling tool for system and application performance tuning. VTune Amplifier provides a powerful timeline pane to help developers see performance metrics over time.

How can we correlate logcat messages with the VTune Amplifier timeline pane? It is reasonable for developers to view logs from logcat together with the other performance metrics in the VTune Amplifier timeline pane. By doing this, we can know what happened at a particular time, and see the correlated performance data around that time.

Intel® VTune™ Amplifier can process and integrate performance statistics collected externally with a custom collector, or with your target application, in parallel with the native VTune Amplifier analysis. To achieve this, provide the collected custom data as a CSV file with a predefined structure and load this file into the VTune Amplifier result.

The details of how to create a CSV file with external data are described in the VTune Amplifier user’s guide. Open the VTune Amplifier Help documentation and go to Intel VTune™ Amplifier > User’s Guide > External Data Import to find how to create the CSV file with external data, along with examples of the CSV file format. To view logcat messages in the VTune timeline pane, follow the guide to convert the logcat messages into a CSV file, then load the file in VTune Amplifier.

An Example

Here is an example. I have a Java application, “com.example.Thread1”. In my application, I have a function that contains lots of computation. The pseudocode looks like this:

void myfunction()
{
    Log.v("MYTEST", "start block 1");
    {
        ... // my computation block 1
    }
    Log.v("MYTEST", "start block 2");
    {
        ... // my computation block 2
    }
    Log.v("MYTEST", "my computation finished");
}

In the VTune Amplifier timeline pane shown below, we can see that this function executed six times. The main thread ID for my application is 12271. The brown column shows the performance data (the CPU time) we collected during the VTune Amplifier profiling.

For each execution of the function, if we collect the logcat messages with “logcat -v threadtime”, we get something like this:

01-12 11:13:19.090  2174  2174 V MYTEST  : start block 1
01-12 11:13:19.260  2174  2174 V MYTEST  : start block 2
01-12 11:13:19.500  2174  2174 V MYTEST  : my computation finished

Now we can convert the logcat messages into a CSV file in the format that can be loaded into VTune Amplifier. According to the VTune Amplifier documentation, we could have, for example, the following CSV file:

name,start_tsc.UTC,end_tsc,pid,tid
V/MYTEST : start block 1,2015-01-12 03:13:19.090,2015-01-12 03:13:19.090,2174,2174
V/MYTEST : start block 2,2015-01-12 03:13:19.260,2015-01-12 03:13:19.260,2174,2174
V/MYTEST : my computation finished,2015-01-12 03:13:19.500,2015-01-12 03:13:19.500,2174,2174

Here we use the log tag and message string as the “name”. We use the time of the message as both start_tsc.UTC and end_tsc, and use the process ID and thread ID from logcat as pid and tid. The fields are separated by commas.
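The conversion described above can be automated. The sketch below turns one “logcat -v threadtime” line into a CSV row; the year and the device time zone are assumptions supplied by the caller, since neither appears in the logcat line itself:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

class LogcatToCsv {
    // Convert one "logcat -v threadtime" line into a VTune CSV row:
    //   name,start_tsc.UTC,end_tsc,pid,tid
    static String toCsvRow(String line, int year, String deviceTz) {
        // threadtime format: "MM-DD HH:MM:SS.mmm  PID  TID LEVEL TAG : message"
        String[] parts = line.trim().split("\\s+", 6);
        String localTime = year + "-" + parts[0] + " " + parts[1];
        String pid = parts[2], tid = parts[3], level = parts[4];
        // Use "LEVEL/TAG : message" as the name, replacing commas
        // because the comma is the reserved CSV separator.
        String name = (level + "/" + parts[5]).replace(',', ';');
        try {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
            fmt.setTimeZone(TimeZone.getTimeZone(deviceTz)); // parse as device-local time
            Date t = fmt.parse(localTime);
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));    // emit as UTC
            String utc = fmt.format(t);
            // A log message is an instant, so start and end times are identical.
            return name + "," + utc + "," + utc + "," + pid + "," + tid;
        } catch (ParseException e) {
            throw new IllegalArgumentException("Unexpected logcat line: " + line, e);
        }
    }
}
```

For the first log line above, toCsvRow(line, 2015, "GMT+8") produces a row matching the first CSV row shown above, assuming the device clock runs eight hours ahead of UTC.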

From VTune Amplifier “Analysis Type” > “Import from CSV”, select the CSV file we created; VTune Amplifier will load the data and show the correlated logcat messages with the performance data on the timeline. In the screenshot of the output below, moving the mouse over a message point (the small yellow triangle) shows each logcat message from our application.

There are several tips and tricks for creating the CSV file.

  1. The time from logcat is the time from the Android OS, which is relative to a specific time zone. This time needs to be converted to UTC in the CSV file accepted by VTune Amplifier.
  2. The CSV file name should specify the hostname where your custom collector gathered the data, e.g., [user-defined]-hostname-<hostname-of-system>.csv. For an Android target, you can get the host name by reading the file /proc/sys/kernel/hostname from the adb shell.
  3. You can customize the string in the “name” column of the CSV file. In the screenshot shown above, I simply used the whole logcat message line as the “name”. Note that you will need to remove any comma in the “name” string if one exists in the logcat message; the comma “,” is the reserved separator and must not appear in the name string.
  4. You can also convert kernel messages (e.g., from “dmesg”) into a CSV file and see the kernel logs in the VTune Amplifier timeline pane. This is very useful for system-level developers, e.g., when you are developing a kernel module or device driver. The process ID and thread ID should be set to 0 for messages from the kernel. Note that the kernel message timestamp (e.g., from dmesg) is the offset in seconds from the time the system booted. You need to convert the timestamp into the UTC time when the message was generated, for example: “time of message” = “now” - “system uptime” + “timestamp of the message”. Here “now” is the current time of the system, which you can get with the “date” command from the adb shell. The “system uptime” is the number of seconds elapsed since boot, which you can read from the file /proc/uptime.
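The kernel-timestamp arithmetic in tip 4 can be written down directly. A small sketch, with all values converted to milliseconds (the helper name is my own, not part of any tool):

```java
class KernelMsgTime {
    // "time of message" = "now" - "system uptime" + "timestamp of the message"
    // nowMs: current epoch time in ms (from "date");
    // uptimeSec: seconds since boot (from /proc/uptime);
    // msgTimestampSec: dmesg timestamp in seconds since boot.
    static long messageTimeMs(long nowMs, double uptimeSec, double msgTimestampSec) {
        return nowMs - Math.round(uptimeSec * 1000.0) + Math.round(msgTimestampSec * 1000.0);
    }
}
```

The result is an epoch time in milliseconds that can then be formatted as the UTC timestamp in the CSV row.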

 

A Script

I have created an experimental bash script to make things easier. You can use the script “logcat2vtune.sh” to collect the logcat data and generate the needed CSV file automatically. The script can collect logcat and/or kernel messages, read the target system information, parse the logs, and convert them to a CSV file automatically.

To use this script, you need a bash environment, which is available natively on a Linux host. If you are working on a Windows host, I recommend installing Cygwin*, which provides a Linux-style bash environment on Windows. Here are the basic steps to get the CSV file during VTune Amplifier profiling.

  1. In the shell, make sure you can access “adb”, then start the logcat2vtune.sh script with the proper options, e.g., > ./logcat2vtune.sh -c logcat -g MYTEST
  2. Start the VTune Amplifier performance data collection. This can be done from VTune Amplifier GUI or from command line.
  3. Stop the VTune Amplifier performance data collection.
  4. Press any key in the logcat2vtune.sh shell window to stop the log collection. The script will read the collected log data, parse the logs with bash regular expressions, and convert them into a CSV file. You will find a .csv file created in the current shell folder.
  5. Load the CSV file into VTune Amplifier and examine the log messages from VTune timeline view.

Here are some typical usage modes for using the script file.

$logcat2vtune.sh -c logcat -g MYTEST

Collect the logcat data, filter the logcat messages with the string “MYTEST”, and generate the CSV file. The logcat data is collected internally by the script using the following command:
     $adb shell logcat -v threadtime
“MYTEST” is the string used to filter the logcat messages. Using a filter is highly recommended, since logcat may produce very large logs while we only care about the logs from our own process. The filter can be the string of my logcat tag name, the process ID, the thread ID, or any other string. You can use a comma “,” to specify multiple strings; logs that match any string from the “-g” option will be parsed and included in the generated CSV file.

$logcat2vtune.sh -c dmesg -g “MYDRIVER”

Collect the “dmesg” output, filter it with “MYDRIVER” string and generate the CSV file.

$logcat2vtune.sh -c logcatk -g MYTEST,MYDRIVER

Collect both the logcat data and the kernel message data, filter by the string “MYTEST” or “MYDRIVER”, and generate the CSV file. In this case, you will find both user-level logs from logcat and kernel-level logs from “dmesg” in the VTune Amplifier timeline pane. The logs are collected internally by the script using the following command:
>adb shell logcat -v threadtime -f /dev/kmsg | adb shell cat /proc/kmsg
In this case, you can see the kernel logs from vmlinux with TID 0, and the user-level logs with TID 1922 from logcat.

Please use the command “logcat2vtune.sh -h” to get more details on the script usage. You can customize the script for your own purposes. Please note that this is an experimental script which is not fully validated. Feel free to let me know if you run into any issues when using it.

 

This article applies to:
Products: Intel(R) System Studio 2015, Intel System Studio 2016
Host OS/platforms: Windows (IA-32, Intel(R) 64), Linux* (IA32, Intel(R) 64)
Target OS/platforms: Android*

Getting Intel® Mobile Development Kit working with Nexus Player (FUGU)


Configuration used for this walk-through

Android development tools and environments are in a constant state of flux. An attempt has been made here to provide sufficient links to reference material to enable you to accomplish the desired results using a different setup; however, this is the configuration used for this walkthrough.

  • Commercially purchased FUGU device with Android 5.1.0 image LMY47D*
  • Linux system running Ubuntu 12 with internet access

*Factory image 5.1.0 LMY47D should be flashable to any commercially purchased FUGU device

Building a rooted boot image

In order to enable full functionality of the MDK tools it is necessary to have root access to the device, which requires building the boot.img image yourself. Although the steps included will also result in building a system.img and recovery.img, these appear to be unnecessary at this time for getting the MDK tools working.

Follow the instructions for initializing the build environment and downloading the source as located on the source.google site:  http://source.android.com/source/downloading.html

As the instructions suggest, I located the latest branch for checkout and build, which at the time was LMY47D, or android-5.1.0_r1, for the fugu device. Once all appropriate packages are installed, the pertinent command sequence, run from within the directory where you wish to build the source code, is:

  • repo init -u https://android.googlesource.com/platform/manifest -b android-5.1.0_r1
  • repo sync -j5
  • source build/envsetup.sh
  • lunch full_fugu-userdebug
  • make -j8

Note that the lunch command may also be run without parameters and the appropriate option selected from the menu; this may be necessary for later versions, as the keyword is likely to change. What is important is that you are building for a FUGU device and that you want the USERDEBUG build option.

At this point, hopefully, the build is successful and there should be several files, including the needed boot.img, in the out/target/product/fugu/ directory.

Rooting the Device

Now that you have built a boot image, it's necessary to flash it onto the device. Connect it via a USB cable and turn it on, then verify your connection with "adb devices" to ensure your device is listed. If not, you may need to turn on developer options in your existing image and enable USB debugging.

At this point a fairly simple sequence of commands should enable you to flash your new boot image to the device.

  • adb reboot bootloader
  • fastboot oem unlock
  • fastboot flash boot out/target/product/fugu/boot.img
  • fastboot oem lock
  • fastboot continue

 

Ready to Go!

You are now ready to go!  You should be able to:

 

Recovering from Disaster

If something should go wrong, or perhaps you simply wish to return your device to a factory-default un-rooted image, this is possible as well. Factory images are located here: https://developers.google.com/android/nexus/images and include instructions and run scripts for as simple a process as can be imagined. In fact, the device I used to develop these instructions came with Android 5.0, with which this process did not work as desired, but by using these factory images I updated it to 5.1.0 (LMY47D) and then the procedure worked like a charm.

Using Intel® System Studio in a Virtual Machine Environment


Intel® System Studio and many of its components can be used for software development, analysis and debug targeting workloads running inside virtualized guest OSs. In many ways developing for a virtualized environment is only an extension of the concept of cross-development.

For compilers and libraries this implies that they can be used either in cross-build fashion or as a native compiler installed as part of your guest OS. Here as usual the expectation is that a GNU toolchain is present that the Intel® C++ Compiler can integrate with.

The Intel-enhanced GDB* application debugger can be used to debug locally inside a virtual machine or remotely using TCP/IP forwarding into the guest OS, with a gdbserver debug agent running locally.

System-Visible Event Nexus (SVEN) instrumentation also has no strong dependency on hardware and thus can be used inside a Guest OS. The only dependency is access to a reliable OS timer signal.

The use of the Intel® VTune™ Amplifier for Systems poses the most complex challenge: some features are available under virtualization, while others have limitations or are not available at all. Therefore a considerable part of this whitepaper will focus on the use of the VTune™ Amplifier for Systems.

Two general limitations currently apply to Intel® System Studio and its use in a virtual environment. It does not actively support analysis and debug of workloads that are distributed across multiple guest OSs. Our Intel® System Debugger solution currently also does not support JTAG assisted debug of a guest OS running inside a virtual machine.

The attached whitepaper covers the limitations and capabilities in some detail.

Intel® XDK FAQs - General

Q1: How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with one of our demo apps that best fits your app idea and learn from it, or take parts from multiple apps.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using Intel XDK. Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post it to our forums.

Q2: Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor
Q3: How do I get code refactoring capability in Brackets*, the code editor in Intel® XDK?

You will have to add the "Rename JavaScript* Identifier" extension and "Quick Search" extension in Brackets* to achieve some sort of refactoring capability. You can find them in Extension Manager under File menu.

Q4: Why doesn’t my app show up in Google* Play for tablets?

...to be written...

Q5: What is the global-settings.xdk file and how do I locate it?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

You can locate global-settings.xdk here:

  • Mac OS X*
    ~/Library/Application Support/XDK/global-settings.xdk
  • Microsoft Windows*
    %LocalAppData%\XDK\global-settings.xdk
  • Linux*
    ~/.config/XDK/global-settings.xdk
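The locations above can be resolved with a small shell sketch. This assumes a bash-like shell; the Windows branch assumes Cygwin* or a similar environment, where %LocalAppData% is exposed as $LOCALAPPDATA:

```shell
# Pick the likely global-settings.xdk path for the current OS.
case "$(uname -s)" in
    Darwin)         cfg="$HOME/Library/Application Support/XDK/global-settings.xdk" ;;
    Linux)          cfg="$HOME/.config/XDK/global-settings.xdk" ;;
    CYGWIN*|MINGW*) cfg="$LOCALAPPDATA/XDK/global-settings.xdk" ;;  # Windows
    *)              cfg="$HOME/.config/XDK/global-settings.xdk" ;;  # fall back to the Linux path
esac
echo "$cfg"
```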

If you are having trouble locating this file, you can search for it on your system using something like the following:

  • Windows:
    > cd /
    > dir /s global-settings.xdk
  • Mac and Linux:
    $ sudo find / -name global-settings.xdk
Q6: When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk and xhr libraries are only needed with legacy build tiles. The Cordova* library is needed for all builds. When building with Cordova* tiles, the intelxdk and xhr libraries are ignored, so they can be omitted.

Q7: What is the process if I need a .keystore file?

Please send an email to html5tools@intel.com specifying the email address associated with your Intel XDK account in its contents.

Q8: How do I rename my project that is a duplicate of an existing project?

Make a copy of your existing project directory and delete the .xdk and .xdke files from it. Import it into Intel XDK using the ‘Import your HTML5 Code Base’ option and give it a new name to create a duplicate.
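The copy-and-delete step can be sketched as follows (the directory and file names here are hypothetical):

```shell
# Duplicate a project and strip the Intel XDK project files so the copy can
# be re-imported under a new name.
src=$(mktemp -d)                      # stand-in for your existing project
touch "$src/index.html" "$src/MyProject.xdk" "$src/MyProject.xdke"
dup="${src}-copy"
cp -r "$src" "$dup"
rm -f "$dup"/*.xdk "$dup"/*.xdke      # delete the .xdk and .xdke files
ls "$dup"                             # only index.html remains
```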

Q9: How do I try to recover when Intel XDK won't start or hangs?
  • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.
    On a [Windows*] machine this can be done using the following on a standard command prompt (administrator not required):
    > cd %AppData%\..\Local\XDK
    > del *.* /s/q
    To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:
    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *
    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.

Please refer to this post for more details regarding troubles in a VM. It is possible to make this scenario work but it requires diligence and care on your part.

  • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to mailto:html5tools@intel.com.

Q10: Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are then assembled into Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*
Q11: How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at http://developer.android.com/tools/help/draw9patch.html on how to create a 9 patch png image. We also plan to incorporate them in some of our sample apps to illustrate their use.

Q12: How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

Q13: What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID for your

iOS* App

Android* App

Windows* Phone 8 App

Q14: Is it possible to modify Android* Manifest through Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that contains only a plugin.xml file, which can then be added to the AndroidManifest.xml file during the build process. In essence, you need to change the plugin.xml file of the locally cloned plugin to include directives that will make those modifications to the AndroidManifest.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="com.tricaud.webintent" version="1.0.0">
    <name>WebIntentTricaud</name>
    <description>Ajout dans AndroidManifest.xml</description>
    <license>MIT</license>
    <keywords>android, WebIntent, Intent, Activity</keywords>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can check the AndroidManifest.xml created in the apk using the aapt tool, with the command line:

aapt l -M appli.apk >text.txt  

This writes the list of files in the apk and the details of the AndroidManifest.xml to text.txt.

Q15: How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image. 

Q16: Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • Your App ID specified in the project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should be modified to include only alpha, space and numbers.
Q17: How do I add multiple domains in Domain Access? 

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab. 

Q18: How do I build more than one app using the same Apple developer account?

On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

Q19: How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top level directory (same location as the other intelxdk.*.config.xml files) and add the following lines for supporting icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Q20: Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

Q21: How do I sign an Android* app using an existing keystore?

Uploading an existing keystore in Intel XDK is not currently supported but you can send an email to html5tools@intel.com with this request. We can assist you there.

Q22: How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

Q23: How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you may change your display language to English which can be downloaded by a Windows* update. Once you have installed the English language, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

Q24: How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

Q25: How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.
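A minimal sketch of that recommended layout — the directory names are hypothetical, and the Intel XDK does not require these exact names:

```shell
# The "project directory" holds the .xdk file and intelxdk.config.*.xml files;
# the "source directory" (www/ here) holds only what should be packaged.
root=$(mktemp -d)
mkdir -p "$root/MyProject/www/js"
touch "$root/MyProject/www/index.html" "$root/MyProject/www/js/app.js"
# MyProject/        <- project directory (MyProject.xdk lives here)
#   www/            <- source directory (set on the Projects tab)
#     index.html
#     js/app.js
ls "$root/MyProject/www"
```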

Q26: I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

Q27: How do I completely uninstall the Intel XDK from my system?

See the instructions in this forum post: https://software.intel.com/en-us/forums/topic/542074. Then download and install the latest version from http://xdk.intel.com.

Q28: Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

Back to FAQs Main

Intel® Media for Mobile Getting Started Guide


Media for Mobile Getting Started Guide

Introduction

Media for Mobile, a feature of Intel® Integrated Native Developer Experience (Intel® INDE), is a set of easy-to-use components and APIs for a wide range of media scenarios. It contains several complete pipelines for the most popular use cases and enables you to add your own components to those pipelines.

The samples demonstrate how to incorporate the Media for Mobile into various applications for Android*, iOS* and Windows* RT.

Download the Media for Mobile samples from:

Media for Mobile Samples on GitHub

Media for Mobile Samples

Supported Host/Target Operating Systems: Android*, Windows* and OS X*

To know more about the supported system requirements and list of available target samples, click here.

The following tutorials guide you in your first steps with pre-built samples on...

Other tutorials that will help you to get started with Media for Mobile are available on this page.

Media for Mobile is available for free with the Intel® INDE Starter Edition. Click here for more information.

Please see the list of open issues in Media for Mobile samples repository here.


Intel® System Studio 2016 Beta Update 1 - What's New


 

Intel® System Studio 2016 Beta provides deep hardware and software insights to speed up development, testing, and optimization of Intel-based IoT, intelligent, mobile, and embedded systems. Intel® System Studio 2016 Beta adds exciting new features such as enhanced Intel® Quark™ SoC, Edison, and SoFIA support, improved Eclipse* integration, Wind River* Workbench* integration, and native code generation support for Intel® Graphics Technology on Linux* targets.

We also introduce Intel® System Studio 2016 Beta for Windows* targets, with Microsoft* Visual Studio* integration. It adds support for cross-development targeting Microsoft* Windows* 7 and 8.1; additionally, the Ultimate Edition adds system debug and remote performance, power, and thermal analysis. It is intended for use on Microsoft* Windows* host operating systems, with the intention of deploying build results and doing sampling analysis on Microsoft* Windows* and Microsoft* Windows* Embedded targets.

What's New in Intel® System Studio 2016 Beta Update 1

  1. Intel® C++ Compiler: several bug fixes. See Compiler release notes for more details.
  2. Intel® Integrated Performance Primitives (Intel® IPP): several internal bug fixes.
  3. Intel® System Debugger: new supported targets (e.g., Brickland Broadwell Server). See System Debugger release notes for more details. The debugger supports 64-bit host OS systems only and requires a 64-bit Java* Runtime Environment (JRE) to operate.

What's New in Intel® System Studio 2016 Beta

New platform support for the latest Airmont, Intel® Quark™, Edison, and SoFIA platforms is provided by various components: use Intel® System Studio to develop and debug system software for all upcoming mobile and embedded platforms. Please check with us for early access to upcoming processor support under a non-disclosure agreement.

Intel® C++ Compiler

Support and optimizations for

  • Enhanced C++11 feature support
  • Enhanced C++14 feature support
  • FreeBSD* support
  • Added support for Red Hat Enterprise Linux* 7
  • Deprecated Red Hat Enterprise Linux* 5.

Intel® VTune™ Amplifier for Systems

  • Basic hotspots, Locks & Waits and EBS with stacks for RT kernel and RT application for Linux Targets
  • EBS based stack sampling for kernel mode threads
  • Support for Intel® Atom™ x7 Z8700 & x5 Z8500/X8400 processor series (Cherry Trail) including GPU analysis
  • KVM guest OS profiling from host based on Linux Perf tool
  • Support for analysis of applications in virtualized environment (KVM). Requires Linux kernels > 3.2 and Qemu version > 1.4
  • Automated remote EBS analysis on SoFIA  (by leveraging existing sampling driver on target)
  • Super Tiny display mode added for the Timeline pane to easily identify problem areas for results with multiple processes/threads
  • Platform window replacing the Tasks and Frames window and providing CPU, GPU, and bandwidth metrics data distributed over time
  • General Exploration analysis views extended to display a confidence indication (greyed-out font) for non-reliable metrics data resulting, for example, from a low number of collected samples
  • GPU usage analysis for OpenCL™ applications extended to display compute-originated batch buffers on the GPU software queue in the Timeline pane (Linux* target only)
  • New filtering mode for command line reports to display data for the specified column names only

Intel® Inspector for Systems

  • Added support for DWARF Version 4 symbolics.
  • Improved custom install directory process.
  • For Windows, added limited support for memory growth when analyzing applications containing Windows* fibers.

GDB* - The GNU Debugger

  • GDB Features
    • The version of GDB provided as part of Intel® System Studio 2016 is based on GDB version 7.8. Notably, it contains the following features added by Intel:
  • Data Race Detection (pdbx):
    • Detect and locate data races for applications threaded using POSIX* threads
  • Branch Trace Store (btrace):
    • Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
  • Pointer Checker:
    • Assist in finding pointer issues if compiled with the Intel® C++ Compiler and the Pointer Checker feature enabled (see the Intel® C++ Compiler documentation for more information)
  • Intel® Processor Trace (Intel® PT) Support:
    • Improved version of Branch Trace Store supporting Intel® TSX. For 5th generation Intel® Core™ Processors and later access it via command:
      • (gdb) record btrace pt
    • Those features are only provided for the command line version and are not supported via the Eclipse* IDE Integration.

 

Intel® Debugger for Heterogeneous Compute 2016 Features
The version of Intel® Debugger for Heterogeneous Compute 2016 provided as part of Intel® System Studio 2016 uses GDB version 7.6. It provides the following features:

  • Debugging applications containing offload enabled code to Intel® Graphics Technology
  • Eclipse* IDE integration

Intel® System Debugger

  • Support for Intel® Atom™ x7 Z8700 & x5 Z8500/X8400 processor series (Cherry Trail)
  • Several bug fixes and stability improvements

Intel® Threading Building Blocks

 

  • Added a C++11 variadic constructor for enumerable_thread_specific; its arguments are used to construct the thread-local values.
  • Improved exception safety for enumerable_thread_specific.
  • Added documentation for tbb::flow::tagged_msg class and tbb::flow::output_port function.
  • Fixed build errors for systems that do not support dynamic linking.
  • C++11 move-aware insert and emplace methods have been added to concurrent unordered containers.

 

Product Contents of Intel® System Studio 2016 Beta Update 1 for Windows*

The product contains the following components:

  1. Intel® C++ Compiler 16.0 Beta Update 1
  2. Intel® Integrated Performance Primitives 9.0 Beta Update 1
  3. Intel® Math Kernel Library 11.3 Beta
  4. Intel® Threading Building Blocks 4.3 Update 4
  5. Intel® System Studio System Analyzer, Frame Analyzer and Platform Analyzer 2015 R1
  6. Intel® VTune™ Amplifier 2016 Beta for Systems with Intel® Energy Profiler
    • Intel® VTune™ Amplifier Sampling Enabling Product (SEP) 3.15
    • SoC Watch for Windows* 1.10.2
  7. Intel® Inspector 2016 Beta for Systems
  8. Intel® System Debugger 2016 Beta
    • Intel® System Debugger notification module xdbntf.ko (provided under GNU General Public License v2)
  9. OpenOCD 0.8.0 library (provided under GNU General Public License v2+)
    • OpenOCD 0.8.0 source (provided under GNU General Public License v2+)

Product Contents of Intel® System Studio 2016 Beta Update 1 for Windows* Host

The product contains the following components:

  1. Intel® C++ Compiler 16.0 Beta Update 1
  2. Intel® Integrated Performance Primitives 9.0 Beta Update 1
  3. Intel® Math Kernel Library 11.3 Beta
  4. Intel® Threading Building Blocks 4.3 Update 4
  5. Intel® System Debugger 2016 Beta
    • Intel® System Debugger notification module xdbntf.ko (Provided under GNU General Public License v2)
  6. OpenOCD 0.8.0 library (Provided under GNU General Public License v2+)
    • OpenOCD 0.8.0 source (Provided under GNU General Public License v2+)
  7. GNU* GDB 7.8.1 (Provided under GNU General Public License v3)
    • Source of GNU* GDB 7.8.1 (Provided under GNU General Public License v3)
  8. SVEN Technology 1.0 (SDK provided under GNU General Public License v2)
  9. Intel® VTune™ Amplifier 2016 Beta for Systems with Intel® Energy Profiler 
    • Intel® VTune™ Amplifier Sampling Enabling Product (SEP) 3.15
    • Intel® Energy Profiler
    • WakeUp Watch for Android* 3.1.6
    • SoC Watch for Android* 1.5.4
  10. Intel® Inspector 2016 Beta for Systems
  11. Intel® System Studio System Analyzer 2015 R1

Product Contents of Intel® System Studio 2016 Beta Update 1 for Linux* Host

The product contains the following components:

  1. Intel® C++ Compiler 16.0 Beta Update 1
  2. Intel® Integrated Performance Primitives 9.0 Beta Update 1 for Linux*
  3. Intel® Math Kernel Library 11.3 Beta for Linux*
  4. Intel® Threading Building Blocks 4.3 Update 4
  5. Intel® System Debugger 2016 Beta
    • Intel® System Debugger notification module xdbntf.ko (Provided under GNU General Public License v2)
  6. OpenOCD 0.8.0 library (Provided under GNU General Public License v2+)
    • OpenOCD 0.8.0 source (Provided under GNU General Public License v2+)
  7. GNU* GDB 7.8.1 (Provided under GNU General Public License v3)
    • Source of GNU* GDB 7.8.1 (Provided under GNU General Public License v3)
  8. SVEN Technology 1.0 (SDK provided under GNU General Public License v2)
  9. Intel® VTune™ Amplifier 2016 Beta for Systems with Intel® Energy Profiler
    • Intel® VTune™ Amplifier Sampling Enabling Product (SEP) 3.15
    • Intel® Energy Profiler
    • WakeUp Watch for Android* 3.1.6  
    • SoC Watch for Android* 1.5.4
  10. Intel® Inspector 2016 Beta for Systems
  11. Intel® System Studio System Analyzer 2015 R1

What's New and Product Contents of Intel® System Studio 2015

Product Contents of previous Intel® System Studio releases

 

Get Help or Advice

Getting Started?
Click the Learn tab for guides and links that will quickly get you started.
Support Articles and White Papers – Solutions, Tips and Tricks

Resources
Documentation
Training Material

Support

We look forward to your questions and feedback. Please don't hesitate to escalate any questions you have or issues you run into. Thank you for helping us continuously improve Intel® System Studio.

Intel® Premier Support (registration required) – For secure, web-based, engineer-to-engineer support, visit our Intel® Premier Support web site. Once logged in, search for the product name Intel® System Studio for Linux*.

Please provide feedback at any time:

 

Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps


Starting with Apache Cordova CLI 4.1.2, the security model uses a concept called "whitelisting" to restrict your app's access to other domains. By default, the Cordova CLI recommends that your app neither (1) access other domains nor (2) launch external apps via other domains. This means that AJAX calls will not work by default, and your app will not be able to launch external apps like phone, email, SMS, or a browser. You must explicitly provide the appropriate settings to do so.

The current Intel® XDK default settings, however, provide access to other domains (AJAX calls) in the Build Settings (Projects tab) by putting * in the domain list. You are encouraged to replace * with specific domains wherever possible. To allow external applications to launch from your app via specific domains, you must take an extra step in the Build Settings UI: click "+ add another domain" and check the "Allow external application to launch from this domain" checkbox. Examples of external applications that could be launched include Phone, Email, SMS, and Browser. The current UI is slightly confusing, but you can configure it for your use case.

For a detailed explanation of Cordova domain whitelisting, please refer to the Cordova documentation.

The rest of this document shows how you can set domain whitelisting in the Intel XDK build settings for your specific requirement.

Here are a few possible scenarios for your app:

  1. You do not want to access any domains from within your app (no AJAX), and you do not want your app to launch any external application like Phone, Email, SMS, Browser, etc. Your settings would be as follows (No Whitelist):

  2. To allow your app to access a specific domain, such as http://google.com or http://*.google.com or https://*.google.com, your settings would be as follows (Internal Whitelist):

  3. To allow your app to access all domains (if you are not sure which domains you will access from your app, or if you access many domains through AJAX), use * in the domain list box. This is the default setting that the Intel XDK provides with the templates and some of the sample apps (Internal Whitelist - access to all):

  4. If you do not want to use AJAX, but do want to launch external apps from your app through specific domains, then use settings like the following: tel:*, sms:*, mailto:* and http://*. When using values like these, make sure to set the checkbox for "Allow external applications to launch from this domain" (External Whitelist):

  5. To allow external apps to be launched from your app through all domains, consider these settings. For example, if your app launches many other apps or you are not sure which ones, put * in the domain list and set the checkbox for "Allow external application to launch from this domain" (External Whitelist - allow all domains to launch from external app):

  6. To allow your app to access specific domains (AJAX) and allow external apps to be launched from your app through specific domains (like launching the phone app or the default browser), use the following settings. This is the recommended way to specify your settings for AJAX as well as for launching external applications (Internal Whitelist and External Whitelist):

  7. To access all domains from within your app and to allow external apps to be launched from your app for all domains (if you are not sure which domains your app accesses, or you have many domains to access and want multiple apps launched from your app through multiple domains), use this option. Be aware that this option is the least secure, since it exposes the most security vulnerabilities in your app (Internal Whitelist and External Whitelist - access all):

Note that with the current Cordova implementation the order of your domain lists matters, so make sure you specify your Internal Whitelists (AJAX case) first and then your External Whitelists (launching external apps).
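For reference, the whitelist settings described above translate into Cordova access directives in the generated config file. Below is a sketch of what the combined internal-plus-external case might produce; the domain names are placeholders, and exact attribute support depends on the Cordova version you build against:

```xml
<!-- Internal whitelist: domains the app may reach via AJAX -->
<access origin="http://mydomain.com" />
<access origin="https://*.mydomain.com" />

<!-- External whitelist: schemes/domains allowed to launch external apps -->
<access origin="tel:*" launch-external="yes" />
<access origin="sms:*" launch-external="yes" />
<access origin="mailto:*" launch-external="yes" />
```

The order of the `<access>` elements mirrors the ordering rule above: internal entries first, then external ones.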

Intel® XDK FAQs - Cordova

Q1: How do I set app orientation?

If you are using Cordova* 3.X build options (Crosswalk* for Android*, Android*, iOS*, etc.), you can set the orientation under the Projects panel > Select your project > Cordova* 3.X Hybrid Mobile App Settings - Build Settings. Under the Build Settings, you can set the Orientation for your desired mobile platform.  

If you are using the Legacy Hybrid Mobile App Platform build options (Android*, iOS* Ad Hoc, etc.), you can set the orientation under the Build tab > Legacy Hybrid Mobile App Platforms Category- <desired_mobile_platform> - Step 2 Assets tab. 

[iPad] Create a plugin (a directory with one file) that only has a config.xml that includes the following: 

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <array><string>UIInterfaceOrientationPortrait</string></array>
</config-file>

Add the plugin on the build settings page. 

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. You can import it as a third-party Cordova* plugin using the Cordova* registry notation:

  • net.yoik.cordova.plugins.screenorientation (includes latest version at the time of the build)
  • net.yoik.cordova.plugins.screenorientation@1.3.2 (specifies a version)

Or, you can reference it directly from the GitHub repo: 

The second reference provides the git commit referenced here (we do not support pulling from the PhoneGap registry).

Q2: Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK’s build system will work with it.

Q3: How do I send an email from my App?
You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.
Q4: How do you create an offline application?
You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
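A minimal sketch of such an offline.appcache manifest follows; the file names are placeholders for your app's actual assets:

```
CACHE MANIFEST
# v1 -- change this comment to force clients to re-fetch the cached files

CACHE:
index.html
css/app.css
js/app.js

NETWORK:
*
```

You then reference the manifest from your top-level page, e.g. `<html manifest="offline.appcache">`, so the browser knows to cache the listed files for offline use.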
Q5: How do I work with alarms and timed notifications?
Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK’s build system will work with it. 
Q6: How do I get a reliable device ID? 
You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8. 
Q7: How do I implement In-App purchasing in my app?
There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called ‘In App Purchase’ which can be downloaded here.
Q8: How do I install custom fonts on devices?
Fonts can be considered an asset that is included with your app. Just like images and CSS files, they are private to the app and not shared with other apps on the device. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application, there is no download time to consider: they are part of your app and already exist on the device after installation.
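Wiring up a bundled font is a standard CSS @font-face rule. A sketch, assuming a hypothetical font file shipped as fonts/MyFont.ttf inside your app:

```css
/* Register the bundled font file under a family name of our choosing. */
@font-face {
  font-family: "MyFont";          /* hypothetical family name */
  src: url("fonts/MyFont.ttf");   /* path relative to your app's www root */
}

/* Use it like any other font family, with a fallback. */
body {
  font-family: "MyFont", sans-serif;
}
```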
Q9: How do I access the device’s file storage?
You can use HTML5 local storage and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
Q10: Why isn't AppMobi* push notification services working?
This seems to be an issue on AppMobi’s end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.
Q11: How do I configure an app to run as a service when it is closed?
If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.
Q12: How do I dynamically play videos in my app?

1) Download the JavaScript and CSS files from https://github.com/videojs

2) Add them in the HTML5 header. 

<link href="video-js.css" rel="stylesheet"><script src="video.js"></script>

 3) Add a panel ‘main1’ that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

<div class="panel" id="main1" data-appbuilder-object="panel" style="">
  <video id="example_video_1" class="video-js vjs-default-skin" controls="" preload="auto" width="200" poster="camera.png" data-setup="{}">
    <source src="JAIL.mp4" type="video/mp4">
    <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p>
  </video>
  <a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a>
</div>

 4) When the user clicks on the video, the click event sets the ‘src’ attribute of the video element to what the user wants to watch. 

function runVid2() {
    document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
    $.ui.loadContent("#main1", true, false, "pop");
}

 5) The ‘main1’ panel opens waiting for the user to click the play button.

Note: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

Q13: How do I design my Cordova* built Android* app for tablets?
This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.
Q14: How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS 6, you need to manually specify the icon sizes that iOS* 6 uses. 

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" /><icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

The build system does not include these by default, so you will have to add them in the additions file. 

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Q15: Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Q16: Iframe does not load in my app. Is there an alternative?
Yes, you can use the inAppBrowser plugin instead.
Q17: Why are intel.xdk.istablet and intel.xdk.isphone not working?
Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the same.
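The viewport-size approach can be sketched as follows; the 768px breakpoint is an assumption, so pick whatever thresholds suit your target devices:

```javascript
// Classify the device from its viewport width instead of relying on the
// legacy intel.xdk.istablet / intel.xdk.isphone properties.
function deviceClass(width) {
  if (width >= 768) return "tablet";   // assumed tablet breakpoint
  if (width >= 320) return "phone";
  return "small";
}

// In a real app, feed it the actual screen size:
// var cls = deviceClass(screen.width);
```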
Q18: How do I work with the App Security plugin on Intel XDK?

Select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. Building it as a Legacy Hybrid app has been known to cause issues when compiled and installed on a device.

Q19: Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project format that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play Services jar to the project. The "com.google.playservices@19.0.0" plugin is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using the Intel XDK.

To stay compatible with the Intel XDK, change the Admob plugin's dependency to "com.google.playservices@19.0.0".

Q20: Why does the intel.xdk.camera plugin fail? Is there an alternative?
There seem to be some general issues with the camera plugin on iOS*. As an alternative, use the Cordova camera plugin instead, and change the version to 0.3.3.
Q21: How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included; they will, however, partially work in the Emulator and Debug tabs. If you test on a real device without the Intel XDK geo plugin selected, you should be able to see what does and does not work on your device. The Intel XDK geo plugin cannot be used in the same build as the Cordova geo plugin; do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet for quickly getting an initial reading. It derives a reading from a variety of inputs; it is usually not as accurate as geo fine, but generally accurate enough to know what town you are in and your approximate location within it. Geo coarse will also prime the geo cache, so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you get no geo data at all: there is no guarantee you will get a geo fine reading, or get one in a reasonable period of time. Success with geo fine depends heavily on parameters that are typically outside of your control.

Q22: Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is; you can find the one that best fits the bill in the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected 
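The detection step above can be sketched with a plain user-agent check rather than uaparser.js; the regular expressions are assumptions, and a library is more robust:

```javascript
// Decide which playback strategy to use from the user agent string.
function detectPlatform(userAgent) {
  if (/Android/i.test(userAgent)) return "android";        // use the podcast plugin
  if (/iPhone|iPad|iPod/i.test(userAgent)) return "ios";   // use a <video>/<audio> tag
  return "other";
}

// In a real app you would pass the browser's own string:
// var platform = detectPlatform(navigator.userAgent);
```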

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->. 

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

Q23: How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.

Q24: Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.

Q25: Why does the Cordova version not match between the Projects tab Build Settings, Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version convention that the Cordova project uses to distinguish build tools (the CLI version) from frameworks (the Cordova version) and plugins.

The CLI version you specify in the Projects tab Build Settings section is the "Cordova CLI" version that the build system will use to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova framework versions, which vary as a function of the target platform. For example, the Cordova CLI 5.0 platformsConfig file is "pinned" to the Android Cordova framework version 4.0.0, the iOS Cordova framework version 3.8.0 and the Windows 8 Cordova framework version 3.8.1 (among other targets). The Cordova CLI 4.1.2 platformsConfig file is "pinned" to Android Cordova 3.6.4, iOS Cordova 3.7.0 and Windows 8 Cordova 3.7.1.

This means that the Cordova framework version you are using "on device" with a built app will not equal the version number that is in the CLI field that you specified in the Build Settings section of the Projects tab when you built your app. Technically, the target-specific Cordova frameworks can be updated [independently] within a given version of CLI tools, but our build system always uses the Cordova framework versions that were "pinned" to the CLI when it was released (that is, the Cordova framework versions specified in the platformsConfig file).

The reason you may see Cordova framework version differences between the Emulate tab, App Preview and your built app is:

  • The Emulate tab has one specific Cordova framework version it is built against. We try to make that version match, as closely as possible, the default Intel XDK version of the Cordova CLI.
  • App Preview is released independently of the Intel XDK and, therefore, may support a different version than what you will see reported by the Emulate tab and your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is the default version of the Intel XDK at the time App Preview is released; but since the various tools are not released in perfect sync, that is not always possible.
  • Your app always uses the Cordova framework version that is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section, when you built your app.
  • BTW: the version of the Cordova framework that is built into Crosswalk is determined by the Crosswalk project, not by the Intel XDK build system. There is some customization the Crosswalk project team must do to the Cordova framework to include Cordova as part of the Crosswalk runtime engine. The Crosswalk project team generally releases each Crosswalk version with the then current version of the Android Cordova framework. Thus, the version of the Android Cordova framework that is included in your Crosswalk build is determined by the version of Crosswalk you choose to build against.

Do these Cordova framework version numbers matter? Not that much. There are some issues that come up that are related to the Cordova framework version, but they tend to be few and far between. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose and the specific webview present on your test devices. See this blog for more details about what a webview is and why the webview matters to your app: When is an HTML5 Web App a WebView App?.

p.s. The "default version" of the CLI that the Intel XDK uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and the various Intel XDK components. Also, we are unable to implement every release that is made by the Cordova project; thus the reason why we do not support every Cordova release that is available to Cordova CLI users.

Q26: How do I add a third party plugin?
Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. If the plugin was successfully added to your build, you will see it in the build log.
Q27: How do I make an AJAX call that works in my browser work in my app?
Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.
Q28: I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

Back to FAQs Main 

Intel® XDK FAQs - Crosswalk

Q1: How do I play audio with different playback rates?

Here is a code snippet that allows you to specify playback rate:

var myAudio = new Audio('/path/to/audio.mp3');
myAudio.play();
myAudio.playbackRate = 1.5;
Q2: Why are Intel XDK's Android build files so large?

If your app has been built with Crosswalk, it will be a minimum of 15-18MB in size because it includes a complete web browser to use instead of the built-in webview on the device. Despite the size, this is the preferred solution for Android, because the built-in webviews on the majority of Android devices are inconsistent.

When using the "Legacy" build option, changing the code base from "gold" to "lean" will reduce the size of your APK, but the "lean" option also excludes the Cordova 2.9 library components (among other elements). Investing time and effort in the legacy build system is not recommended: it has been deprecated and will be obsolete sometime during 2015. The legacy build system also cannot take advantage of the numerous Cordova plugins available for use with the Cordova and Crosswalk build systems.

Q3: Why does my Android Crosswalk build fail with the com.google.playservices plugin? [Plugin]

The Intel XDK Crosswalk build system does not support the library project format that was introduced in the com.google.playservices@21.0.0 plugin. Use "com.google.playservices@19.0.0" instead.

Q4: Why is the size of my installed app much larger than the apk for a Crosswalk application?

This is because the apk is a compressed image, so when installed it occupies more space due to being decompressed. Also, when your Crosswalk app starts running on your device it will create some data files for caching purposes which will increase the installed size of the application.

Q5: Why does my app fail to run on some devices?

There are some Android devices in which the GPU hardware/software subsystem does not work properly. This is typically due to poor design or improper validation by the manufacturer of that Android device. Your problem Android device probably falls under this category.

Note that each iteration of the Crosswalk system is based on more recent versions of the Chromium project, and each new version of Chromium has become more "aggressive" in its use of the GPU subsystem on Android devices. Our experience has been that the Crosswalk 7 build is the least aggressive regarding the use of the GPU subsystem and generally runs on the widest array of Android devices. If you desire maximum compatibility, you should use the Crosswalk 7 build option.

Q6: How do I stop the "pull to refresh" from resetting and restarting my Crosswalk app?

See the code posted in this forum thread for a solution: https://software.intel.com/en-us/forums/topic/557191#comment-1827376.

Back to FAQs Main 

Brick by Brick: Building a Better Game with LEGO* Minifigures Online


Download  Lego Minifigures Optimization.pdf

Game makers now enjoy unprecedented market opportunity by offering titles that deliver advanced gaming experiences on both PCs that run Microsoft Windows* and on mobile devices that run Android*. Optimizing graphics for Intel® Core™ processors as well as Intel® Atom™ processors is rapidly becoming a strategic imperative.

With the evolution of mobile gaming beyond its roots in casual games, revenue projections in this segment are growing dramatically. In fact, market research firm Newzoo projects that mobile games will replace consoles as the largest game segment by revenue in 2015, reaching USD 30.0 billion in 2015 and USD 40.9 billion by 2017.1

Helping cement its more than 20 years of providing well-regarded games, Funcom developed LEGO* Minifigures Online (LMO) with both Intel® architecture-based 2 in 1 PCs and Android tablets as primary target devices. The company’s optimizations provide exceptional graphical experiences on both platforms, building on recognized successes by Funcom that include The Longest Journey (ranked number 59 on the MetaCritic list of the top 100 PC games of all time)2, as well as Anarchy Online*, Age of Conan*, and The Secret World*.

Advanced Pixel Synchronization Effects for Intel® Graphics Technology

The current generation of Intel® graphics hardware extends Intel’s leadership in enabling innovation across the industry, including being fully ready for DirectX* 12 and driving the adoption of advanced features by next-generation games. An excellent example is Intel’s pixel synchronization extension for DirectX 11, which enables programmable blending operations.

This set of capabilities is being widely adopted, becoming a part of the DirectX 12 standard (under the name Raster Ordered Views), being supported by graphics hardware from other manufacturers (such as Nvidia Maxwell*), and being enabled in OpenGL* with the GL_INTEL_fragment_shader_ordering extension.

Intel’s pixel synchronization extension gives developers control over the ordering of pixel shader operations. It can be used to implement functions such as custom blending, advanced volumetric shadows, and order-independent transparency. It provides a way to serialize and synchronize access to a pixel from multiple pixel shaders and to guarantee deterministic pixel changes. On Intel® hardware, the serialization is limited to directly overlapping pixels, so performance remains unchanged for the rest of the code.
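Why ordering matters: the standard src-over blend is not commutative, so if two translucent fragments covering the same pixel are blended in an undetermined order, the final color is nondeterministic. A small illustration in plain Python (not shader code; all names are ours):

```python
def src_over(src, src_alpha, dst):
    """Blend one translucent fragment (src) over the current pixel (dst)."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha) for s, d in zip(src, dst))

background = (0.0, 0.0, 0.0)
red_glass  = ((1.0, 0.0, 0.0), 0.5)   # (color, alpha)
blue_glass = ((0.0, 0.0, 1.0), 0.5)

# Red fragment blended first, then blue:
a = src_over(*blue_glass, src_over(*red_glass, background))
# Blue fragment blended first, then red:
b = src_over(*red_glass, src_over(*blue_glass, background))

print(a)  # (0.25, 0.0, 0.5)
print(b)  # (0.5, 0.0, 0.25)
```

The two orders give visibly different pixels, which is exactly the class of result that pixel synchronization makes deterministic.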

Examples of algorithms enabled by this set of features include order-independent transparency and Adaptive Volumetric Shadow Maps (AVSM).

LEGO Minifigures Online uses AVSM to achieve advanced smoke and cloud effects on both Windows and Android. Comparisons of game scenes on Intel processor-based 2 in 1 PCs with AVSM disabled versus the same scenes with AVSM enabled are shown in Figures 1 through 4. Enhanced graphics quality using AVSM in these scenes provides a more realistic and immersive gaming experience that will also be made available for Android tablets based on Intel Atom x5 and x7 processors.


Figure 1. “Actually Hopping Antelope – Level 2” scene with AVSM disabled.


Figure 2. “Actually Hopping Antelope – Level 2” scene with AVSM enabled.


Figure 3. “Scarlet Serrated Brainiac – Level 5” scene with AVSM disabled.


Figure 4. “Scarlet Serrated Brainiac – Level 5” scene with AVSM enabled.

Cross-Platform Playability and Scaling

LEGO Minifigures Online has been optimized for 4th generation Intel Core processors. It also provides support for both laptop and tablet modes on 2 in 1 PCs as shown in Figures 5 and 6, giving users the raw horsepower of the laptops they love in a more casual environment by converting the device to tablet mode. This flexibility allows gamers to play LMO when they want, where they want, in the mode they want – giving them more opportunity to play.


Figure 5. “Scarlet Serrated Brainiac - Level 5” scene in Laptop Mode.

Notice the larger, more conveniently located touch icons for gamers.


Figure 6. “Scarlet Serrated Brainiac - Level 5” scene in Tablet Mode.

The enhanced graphics capabilities across Intel® platforms make it possible for users on high-end Windows desktops, Windows laptops, 2 in 1 devices, and Intel Atom processor-based tablets running both Windows and Android, to all play together in the same immersive game world.

Improved Battery Life on Intel® Core™ Processors

Optimizing games to reduce power consumption is not only an important aspect of the user experience, but it can also be a critical component to getting favorable reviews. The releases of many otherwise well-received games have been marred by the dreaded one-star reviews dominated by the phrase “kills the battery.”

Intel and Funcom worked together to add Battery Saving Mode as a user-controlled option in LEGO Minifigures Online, as illustrated in Figure 7. This capability can extend battery life by nearly 80 percent on 4th generation Intel Core processors and more than 100 percent on 5th generation Intel Core processors.3


Figure 7.Battery Saving Mode in LEGO* Minifigures Online.

The fundamental approach to improving battery life is to reduce the amount of work for the processor and GPU. Battery Saving Mode in LEGO Minifigures Online achieves that goal by capping framerate at 30 frames per second, disabling anisotropic filtering, post-processing FX, and anti-aliasing.

The overall effect of these measures is to reduce frame draw time, allowing the processor and GPU to enter deeper sleep states during periods of inactivity, thus improving battery life. Details of these battery-life optimizations are available in the Game Developer Conference 2015 presentation, “Power Efficient Programming: How Funcom increased play time in Lego Minifigures by 80%.”
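The frame-cap half of that approach boils down to a simple pacing loop. A sketch in plain Python (names and the 30 FPS budget follow the text; a real game would pace its render loop the same way):

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~33.3 ms per frame

def idle_time(work_seconds, budget=FRAME_BUDGET):
    """How long to sleep after a frame's work so the CPU and GPU
    can drop into deeper sleep states."""
    return max(0.0, budget - work_seconds)

def run_frames(n, simulate_work):
    """Pacing loop: do a frame's work, then yield the rest of the budget."""
    for _ in range(n):
        start = time.perf_counter()
        simulate_work()                    # update + render would go here
        spent = time.perf_counter() - start
        time.sleep(idle_time(spent))

# A 10 ms frame leaves ~23.3 ms of idle time per frame at 30 FPS:
print(round(idle_time(0.010) * 1000, 1))  # 23.3
```

That idle time, repeated 30 times a second, is what lets the SoC spend most of each frame in a low-power state.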

Optimization for Android Devices Based on Intel® Atom™ Processors

Successfully shipping more than its goal of 40 million processors for tablets in 2014,4 Intel has become one of the largest silicon providers for tablets and a growing force in the Android market segment. Intel is extending this drive into 2015 with the introduction of the Intel Atom x5 and x7 processors, based on industry-leading 14 nm manufacturing process technology and compact, low-power system-on-chip (SoC) designs.

  • Performance improvements for gaming include Gen 8 graphics, as well as support for 64-bit processing and multi-tasking.
  • Enhanced battery life is provided by capabilities that include Intel® Display Power Saving Technology and Intel® Display Refresh Rate Switching Technology to help reduce panel backlight and refresh rate opportunistically.

An initial focus for performance improvement of LEGO Minifigures Online on Android devices was native compilation for Intel platforms. Non-native binaries, such as those compiled for ARM*, must be run by the Intel Atom processor using just-in-time compilation, which incurs additional processing overhead, interferes with advanced offline compilation optimizations, and increases loading times.

Intel worked with Funcom to ensure that Android installation packages include native binaries for Intel architecture, overcoming those previous limitations. In fact, providing this support for Android games using the Unity* game engine is straightforward, as discussed in the Intel® Developer Zone article, “Adding x86 Support to Android* Apps Using the Unity* Game Engine.” Further information is available in the articles, “Google Play* Store Submission Process: Android* APK” and “How to Publish Your Apps on Google Play* For x86-based Android* Devices Using Multiple APK Support.”
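Because an APK is a ZIP archive with native libraries under lib/&lt;abi&gt;/, you can check whether a build really carries both instruction sets. A hedged sketch (the library file name and helper are invented for illustration; here a mock APK is built in memory so the check is self-contained):

```python
import io
import zipfile

def native_abis(apk_bytes):
    """Return the set of ABI directories under lib/ in an APK (a ZIP file)."""
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        return {name.split("/")[1]
                for name in apk.namelist()
                if name.startswith("lib/") and name.count("/") >= 2}

# Build a mock "fat" APK in memory for demonstration:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("lib/armeabi-v7a/libgame.so", b"\x7fELF...")
    apk.writestr("lib/x86/libgame.so", b"\x7fELF...")

print(sorted(native_abis(buf.getvalue())))  # ['armeabi-v7a', 'x86']
```

On a real build you would pass the bytes of the produced APK; a package listing only an ARM directory is the case that falls back to binary translation on x86.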

Conclusion

Intel architecture provides a compelling set of opportunities for game developers to expand their potential market segment share. Optimized games can deliver excellent graphical user experiences across the full range of target systems—from high-end desktop systems, to laptop PCs, 2 in 1s, and Intel Atom processor-based tablets. Enabling gameplay that responds to the needs of each platform supports broader usability and prepares game companies to benefit from ongoing expansion of mobile gaming in the years to come.

About the Authors

Filip Strugar is a former game developer, now working for Intel as a Software Graphics Engineer. He enjoys working on various algorithms, inventing things like CMAA and helping game developers in making their games run best on Intel graphics hardware.

Landyn Pethrus is an engineer at Intel, avid gamer, and hardware enthusiast.  When Landyn is not fountain sniping with Ancient Apparition in Dota2, slaying bosses, or pursuing higher level education, he can be found on the rivers of Oregon fishing.

For more information, visit the Intel Game Developer Community at https://software.intel.com/en-us/gamedev/tools

1   Newzoo BV, “Global Mobile Games Revenues to Reach $25 Billion in 2014.” October 29, 2014. www.newzoo.com/insights/global-mobile-games-revenues-top-25-billion-2014/.

2   CBS Interactive as of April 25, 2015. www.metacritic.com/browse/games/score/metascore/all/pc.

3   Source : Internal Intel® battery rundown tests. See details at https://software.intel.com/sites/default/files/managed/4a/38/Power_Efficient_Programming_GDC_2015_Final.pdf.

4   Brian M. Krzanich, Intel CEO Letter to Shareholders, Intel 2014 Annual Report. http://www.intc.com/common/download/download.cfm?companyid=INTC&fileid=819111&filekey=43FE7343-2D01-42E3-A09C-99A3BDEAEEE9&filename=Intel_2014_Annual_Report.pdf.

 

Notices

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, the Intel logo, Intel Atom, and Intel Core are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2015 Intel Corporation.

Quick Installation Guide for Media SDK on Windows with Intel® INDE


Intel® INDE provides a comprehensive toolset for developing media applications targeting both CPUs and GPUs, enriching the development experience of a game or media developer. Yet, if you are used to working with the legacy Intel® Media SDK, or if you just want to get started with those tools quickly, you can follow these steps to install only the Media SDK components of Intel® INDE.

Go to the Intel® INDE Web page, select the edition you want to download, and click the Download link:

On the Intel INDE downloads page, select Online Installer (9 MB):

On the screen where you choose an IDE for integrating the Getting Started tools for Android* development, click Skip IDE Integration and clear the Install Intel® HAXM check box:

At the component selection screen, select only Media SDK for Windows, Media RAW Accelerator for Windows, Audio for Windows, and Media for Mobile in the Build category (you are welcome to select any additional components that you need as well), and click Next. The installer will then install all of the Media SDK components.

Complete the installation and restart your computer. Now you are ready to start building media applications with Intel® Media SDK components!

If you later decide that you need additional components of the Intel® INDE suite, rerun the installer and select the Modify option to change the installed features:

and then you can select additional components that you need:

Complete the installation and restart your computer. Now you are ready to start using additional components of the Intel® INDE suite!

 

Optimizing Unity* Games on Android* OS for Intel® Architecture: A Case Study


Download Document

Unity* is one of the most popular game engines for the mobile environment (Android* and iOS*), and many developers are using it to develop and launch games. Before Unity supported Android on Intel platforms, games ran through a binary translation layer that converted ARM* native code to x86 native code. Some non-native x86 games running on Intel platforms did not work at all, and others had performance issues. With the growth in mobile market share of Intel processors, many developers are now interested in supporting Android on x86 architecture and want to know how to optimize their games.

This article will show a performance gain with native support on Android and share some tips for increasing performance on Intel® architecture using Hero Sky: Epic Guild Wars as an example.


Figure 1. Hero Sky: Epic Guild Wars

Innospark, maker of Hero Sky: Epic Guild Wars, has significant experience in mobile game development using a variety of commercial game engines and also has its own in-house game engine. Hero Sky: Epic Guild Wars is its first Unity-based game launched for the global market. With an increasing number of downloads from the Google Play* store, the company began to get complaints that the game did not work, or lagged, on some Intel processor-based devices with Android. So Innospark decided to port and optimize the game for Android OS on Intel architecture. This article explains what Innospark did for optimization, with profiling results from Intel® Graphics Performance Analyzers (Intel® GPA), such as changing the drawing order and removing unneeded alpha blending.

Introduction

Hero Sky: Epic Guild Wars is an online combat-strategy game with full 3D graphics. Innospark developed and optimized it on an Intel® Atom™ processor-based platform (code named Bay Trail). The Bay Trail reference design and specifications are listed below.

CPU: Intel® Atom™ processor, Quad Core 1.46 GHz
OS: Android* 4.4.4
RAM: 2 GB
Resolution: 1920x1200
3DMark* ICE Storm Unlimited Score: 10,386 (Graphics score: 9,274; Physics score: 17,899)

Table 1. Bay Trail 8” reference design specification and 3DMark* score

Below is a graph showing a performance comparison between non-native x86 and native x86 code on the Bay Trail reference design.


Figure 2. Performance gains with x86 native support

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.

After the game was ported for Android on Intel architecture, the CPU load decreased about 7.1%, FPS increased about 27.8% and execution time decreased about 32.6%. However, GPU Busy increased about 26.7% because FPS increased.

Innospark used Intel GPA to find CPU and GPU bottlenecks during development and used the analysis to solve graphics issues and performance.

Intel GPA System Analyzer measured 59.01 FPS as the baseline performance. Graphics Frame Analyzer, which measures FPS only on the GPU side, measured 120.9 FPS. The FPS numbers differ because System Analyzer monitors the live activity of the process, including both CPU and GPU work, while Graphics Frame Analyzer includes only GPU-related work plus the CPU activity directly related to submitting data to the driver and GPU.

Deep-dive analysis using Graphics Frame Analyzer


Figure 3. Screen capture of the baseline version

After being ported, the game showed 59.01 FPS. We analyzed it in more detail using the Graphics Frame Analyzer in order to decrease the GPU Busy and CPU Load. The tables below show the information captured using the Graphics Frame Analyzer.

Total Primitive Count: 4,376
GPU Duration: 8.56 ms
Time to show frame: 9.35 ms

Table 2. Baseline frame information

 

Type    | Erg | GPU Duration | GPU Memory Read | GPU Memory Write
Sky     | 1   | 1.43 ms      | 0.2 MB          | 7.6 MB
Terrain | 5   | 1.89 ms      | 9.4 MB          | 8.2 MB

Table 3. The high draw call cost of the baseline version

 

Analyze and optimize high draw call cost

Remove unneeded alpha blending


When a display object uses alpha blending, the runtime must combine the color values of every stacked display object and the background color to determine the final color. Thus, alpha blending can be more processor-intensive than drawing an opaque color. This extra computation can hurt performance on slow devices. So we want to remove unneeded alpha blending.
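In memory-traffic terms, an opaque draw simply overwrites the pixel, while a blended draw must first read the destination color back (a read-modify-write). A toy bookkeeping sketch (ours, not a GPU model) of why disabling an unneeded blend removes almost all read traffic:

```python
class Framebuffer:
    def __init__(self, pixels):
        self.colors = [(0.0, 0.0, 0.0)] * pixels
        self.reads = 0
        self.writes = 0

    def draw_opaque(self, i, color):
        self.colors[i] = color            # overwrite: no read of the old pixel
        self.writes += 1

    def draw_blended(self, i, color, alpha):
        dst = self.colors[i]              # read-modify-write
        self.reads += 1
        self.colors[i] = tuple(c * alpha + d * (1 - alpha)
                               for c, d in zip(color, dst))
        self.writes += 1

fb = Framebuffer(4)
for i in range(4):
    fb.draw_blended(i, (0.0, 1.0, 0.0), 1.0)  # alpha == 1: blending is pointless
print(fb.reads, fb.writes)    # 4 4

fb2 = Framebuffer(4)
for i in range(4):
    fb2.draw_opaque(i, (0.0, 1.0, 0.0))       # same image, zero reads
print(fb2.reads, fb2.writes)  # 0 4
```

Both paths produce the same pixels, but the blended path pays a read per pixel, the same pattern visible in the GPU Memory Read column of the measurements that follow.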

The Graphics Frame Analyzer can enable or disable each drawing call so a developer can test and measure without source modification. This feature is in the Blend State tab under the State tab.


Figure 4. Enabling/disabling alpha blending in Graphics Frame Analyzer without source modification.

The table below shows more detailed information about the grass drawing call after alpha blending was disabled: GPU Duration decreased by about 26.0%, and GPU Memory Read decreased by about 97.2%.

 

                 | Baseline    | Alpha blending disabled
GPU Clocks       | 1,466,843   | 1,085,794.5
GPU Duration     | 1,896.6 us  | 1,398.4 us
GPU Memory Read  | 7.6 MB      | 0.2 MB
GPU Memory Write | 8.2 MB      | 8.2 MB

Table 4. Drawing call details after disabling alpha blending (grass)

 

Apply Z-culling efficiently


When an object is rendered by the 3D graphics card, the 3D data is changed into 2D data (x-y), and the Z-buffer, or depth buffer, is used to store the depth information (z coordinate) of each screen pixel. If two objects of the scene must be rendered in the same pixel, the GPU compares the two depths and overrides the current pixel if the new object is closer to the observer. The process of Z-culling reproduces the usual depth perception correctly by drawing the closest objects first so that a closer object hides a farther one. Z-culling provides performance improvement when rendering hidden surfaces.
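The payoff of draw order can be modeled in a few lines: a pixel is shaded only if its depth test passes against what is already in the depth buffer, so drawing near-to-far skips hidden work. A toy model (ours), mirroring the sky/grass case:

```python
def render(draw_calls, pixels):
    """draw_calls: list of (name, depth, covered_pixels); smaller depth = closer.
    Returns how many pixels each call actually shaded."""
    depth_buffer = [float("inf")] * pixels
    shaded = {name: 0 for name, _, _ in draw_calls}
    for name, depth, covered in draw_calls:
        for i in covered:
            if depth < depth_buffer[i]:    # early-Z: test depth before shading
                depth_buffer[i] = depth
                shaded[name] += 1          # pixel shader ran for this pixel
    return shaded

PIXELS = 100
sky   = ("sky",   9.0, range(100))   # far away, covers the whole screen
grass = ("grass", 1.0, range(80))    # near, hides 80% of the sky

print(render([sky, grass], PIXELS))  # {'sky': 100, 'grass': 80} -> 180 shades
print(render([grass, sky], PIXELS))  # {'grass': 80, 'sky': 20}  -> 100 shades
```

Drawing the grass first lets 80% of the sky pixels fail the depth test before shading, which is exactly the reordering Innospark applied.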

The game has two kinds of terrain drawing: sky and grass. Erg 1 is the drawing call for the sky, and Erg 5 is the drawing call for the grass. Because large portions of the sky are behind the grass, much of the sky is never visible during gameplay. However, the sky was rendered before the grass, which prevented efficient Z-culling.


Figure 5. Drawing calls for the sky (Erg 1) and the grass (Erg 5)

Below is the GPU duration of the sky after changing the drawing order.


Figure 6. Result after changing the drawing order of the sky, in Graphics Frame Analyzer.

The table below shows more detailed information about the sky after changing the drawing order: the GPU Duration of the sky decreased by about 88.0%, and GPU Memory Write decreased by about 98.9%.

 

                 | Baseline   | Changed drawing order (sky)
GPU Clocks       | 1,113,276  | 133,975
GPU Duration     | 1,433 us   | 174.2 us
Early Z Failed   | 0          | 2,145,344
Samples Written  | 2,165,760  | 20,416
GPU Memory Read  | 0.2 MB     | 0.0 MB
GPU Memory Write | 9.4 MB     | 0.1 MB

Table 5. Drawing call details after changing the drawing order (sky)

 

Results

The next table shows more detailed data for the x86 optimization after removing unneeded alpha blending and changing the drawing order. GPU Duration decreased by about 25%, and GPU Memory Read/Write decreased by about 42.6% and 30.0%, respectively. In System Analyzer, FPS increased by only 1.06 because Android uses vsync with a 60 FPS cap, but FPS in Graphics Frame Analyzer increased by about 29.7%.

 

                              | x86 Baseline | x86 Optimized
GPU Clocks                    | 6,654,210    | 4,965,478
GPU Duration                  | 8,565.2 us   | 6,386 us
Early Z Failed                | 16,592       | 2,248,450
Samples Written               | 6,053,311    | 2,813,997
GPU Memory Read               | 20.9 MB      | 12.0 MB
GPU Memory Write              | 28.6 MB      | 20.0 MB
FPS (System Analyzer)         | 59.01        | 60.07
FPS (Graphics Frame Analyzer) | 120.9        | 156.8

Table 6. Performance gains after disabling alpha blending and changing the drawing order (sky)
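The percentage gains quoted in the text can be reproduced directly from Table 6:

```python
def pct_change(before, after):
    """Signed percentage change from before to after."""
    return (after - before) / before * 100

# GPU Duration: 8,565.2 us -> 6,386 us
print(round(pct_change(8565.2, 6386), 1))   # -25.4  (~25% decrease)
# GPU Memory Read: 20.9 MB -> 12.0 MB
print(round(pct_change(20.9, 12.0), 1))     # -42.6
# GPU Memory Write: 28.6 MB -> 20.0 MB
print(round(pct_change(28.6, 20.0), 1))     # -30.1  (~30% decrease)
# FPS in Graphics Frame Analyzer: 120.9 -> 156.8
print(round(pct_change(120.9, 156.8), 1))   # 29.7
```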

 


Figure 7. Performance gains after optimized x86 native support

Conclusion

To optimize a game for Android on x86, developers should first port the game to native x86 and then determine where the application bottleneck is. Profiling tools help measure performance and make it easier to see where the performance issues are on the GPU side. Intel GPA's powerful analytic tools provide the ability to experiment without any source modification.

About the Authors

Jackie Lee is an Applications Engineer with Intel's Software Solutions Group, focused on performance tuning of applications on Intel Atom platforms. Prior to Intel, Jackie Lee worked at LG in the electronics CTO department. He received his MS and BS in Computer Science and Engineering from ChungAng University.

References


Intel® Graphics Performance Analyzers
https://software.intel.com/en-us/gpa

Innospark
http://www.innospark.com/#!home-en/c1vtc

Hero Sky: Epic Guild Wars
https://play.google.com/store/apps/details?id=com.innospark.herosky

Unity
http://unity3d.com

Unity Native X86 Support Shines for Square Enix’s Hitman GO*
https://software.intel.com/en-us/articles/unity-native-x86-support-shines-for-square-enix-s-hitman-go

Alpha Blending
http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c36c11f3d612431904db9-7ffe.html


Game On: Intel® x86 and Unity* Contest Challenge Success on the Android* Platform


Download PDF

Power_Up and Talents

Intel recently teamed up with Unity Technologies, maker of the Unity* game development engine, to offer a fun contest that gave game developers an opportunity to build their games with native x86 support for Android* using Unity 5. In February, hundreds of game developers took up the Intel x86 and Unity contest challenge, demonstrating the growing interest in providing native x86 support for the Android platform.

As you may know, x86 support for Android is now available in both Unity 4 and Unity 5. If you’re interested in seeing for yourself how easy it is to add x86 support to your existing Android build in Unity, check out how to produce a fat APK that includes both x86 and ARM* libraries.

 

The Intel® x86 and Unity* Contest

To participate in the Intel x86 and Unity contest, developers simply recompiled their existing games to include x86 support for Android; added the phrase, “Optimized for Intel® x86 mobile devices,” to their app’s description in Google Play; and submitted their entry. Winners were selected at random and awarded one of three prizes: a Unity 5 Pro license, an Acer Iconia* Tab 8 with Android, or an Intel® Solid State Drive 730 Series (240‑GB).

We received about 300 entries, representing a large community of international developers, from Singapore to Mexico. Big production games, indie releases, and even students and hobbyists participated. A wide spectrum of games was submitted—role-player games (RPGs), strategy games, board games, arcade games, racing games, sports games, and action games.

We’re now in the process of shipping these prizes to the lucky winners. Congratulations to the winning developers! Let’s check out some of the games they made.

 

Some of Our Contest Winners

Candy World Quest from Ludic Side Game Studio in Brazil

Candy World Quest from Ludic Side Game Studio in Brazil is a sweet confection of a game. This sugary adventure might appear reminiscent of Candy Crush or Angry Birds at first glance, but Candy World Quest has a charm all its own. In the game, players toss donuts onto the stick of a candy apple to unlock even more challenging, fun, and whimsical stages, with multiple targets and complicated obstacles.


Another standout contest winner is Farming USA by an American game developer Bowen Games, LLC.

Farming USA by an American game developer Bowen Games

In this farming simulator, attractively rendered in a realistic three-dimensional world, players follow the farming cycle—planting, growing, and harvesting crops—while feeding and raising cows, pigs, sheep, and horses. They can walk or drive anywhere on their farm, sitting in the driver’s seat of one of more than 25 farm vehicles, including tractors, combines, semis, and trucks as they perform their daily tasks.

Farming simulator

MOB: The Prologue is a thrilling adventure from SOGWARE, a game development company based in Korea.


This prelude to the Mirror of Bestia (MOB) RPG features epic boss fights and exciting action battles in which pretty anime-style Hunting Girls defend towns on the Soria continent by fighting a series of progressively tougher monsters. As the challenges become more formidable, players can boost the girls’ power by selecting amulets for them to use in their showdowns.

Performance is also supercharged for x86, of course. As SOGWARE enthusiastically noted in its Google Play* app description of the game, “Now Intel x86 mobile devices are supported and optimized thanks to Intel and Unity!”

Mirror of Bestia (MOB) RPG

Crescent Moon Games also joined the party with its entry, Gear Jack Black Hole.

Gear Jack Black Hole

Well known for titles such as Ravensword: The Fallen King, this prolific publisher and developer contributed a game that is as artistic as it is unique. The visuals are truly remarkable, perfectly setting the scene for our hero Jack’s race to escape the black hole or face certain doom. Along this endless runner’s quest, he embarks on a series of missions through fascinating realms and scenes such as Japan, Iceland, a volcanic world, a desert, and an engine room.


Conclusion

All in all, it was amazing to witness the buzz and excitement generated among game developers as a result of the contest. Thanks to all the fantastic game developers that entered the Unity competition and shared their awesome games with us. We’re thrilled that you chose to participate. Thanks to Unity, too, for being such an excellent partner in this initiative!

 

Security Best Practices for Android* 5.0


Android is one of the most popular mobile operating systems in the world.

Despite its popularity, enterprises have largely avoided Android devices due to their security risks.

Previous versions of Android contained numerous vulnerabilities. Google has since made extensive security advancements. In addition to supporting data encryption and automatic screen locking, the latest devices limit the privileges of applications to help protect against security breaches.

Another important enhancement is Google's new program for enterprises—Android for Work. It offers enterprise-level security and supports containerization, which is the ability to separate work and personal data on employee Android devices.

These significant enhancements have now made it possible to use Android securely within the enterprise, provided organizations address the remaining inherent security issues. In this article, I will describe four best practices for Android device management.

  • Prevent rooting or jail breaking
  • Protect against mobile malware
  • Enforce robust security measures
  • Implement device management policies

Prevent rooting or jailbreaking

Rooting means unlocking the Android operating system so that users can install unapproved, potentially malicious applications, update the operating system, and replace the firmware, among other things.


Figure: A simple exploit for rooting the Lenovo Yoga tablet

Rooting is a common occurrence that presents significant security challenges for enterprises. Rooted devices are more vulnerable to malicious apps, can expose the corporate network and its sensitive data, and are more susceptible to hacker attacks.

Jailbreaking is the analogous process of removing hardware restrictions on Apple iOS* devices through the use of software and hardware exploits. Such devices include the iPhone*, iPod* touch, iPad*, and second-generation Apple TV. Jailbreaking permits root access to the iOS file system and manager, allowing the download of applications, extensions, and themes that are unavailable through the official Apple App Store.

To prevent the rooting or jailbreaking of Android devices, it is recommended to block rooted devices from connecting to the network and to train employees on the dangers and repercussions of rooting their smartphones.
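Blocking rooted devices usually starts with detection on the device. One common (and easily bypassed) heuristic is checking for an `su` binary in well-known locations; a minimal sketch in Python, with paths and function names chosen for illustration only:

```python
# Illustrative only: real MDM agents combine many signals, and a determined
# attacker can hide all of these paths.
SU_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/system/app/Superuser.apk",
]

def looks_rooted(path_exists):
    """path_exists: a callable such as os.path.exists.
    Injected as a parameter so the heuristic can be tested off-device."""
    return any(path_exists(p) for p in SU_PATHS)

# Simulated devices:
print(looks_rooted(lambda p: p == "/system/xbin/su"))  # True
print(looks_rooted(lambda p: False))                   # False
```

A device that trips the check would then be denied network access under the policy described above.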

Protect against dangerous mobile software

Android users are able to install applications from any place (not just Google Play) and are consequently exposed to a larger volume of apps that contain malware. This impacts the enterprise because applications targeted by malware can steal login credentials, access the corporate network, and cause critical data loss. The best way to protect the corporate network against mobile malware is to install anti-malware software on approved devices. Here is a list of 10 programs to consider using for protection against malicious software:

  1. Dr.Web Antivirus*
  2. Antivirus and Mobile Security* (Avast)
  3. Mobile Security and Antivirus* by ESET
  4. Armor* for Android
  5. AntiVirus Security Free* by AVG
  6. Mobile Security and AntiVirus* by Avast
  7. Zoner* AntiVirus Free
  8. BitDefender* AntiVirus Free
  9. Hornet* AntiVirus Free
  10. Norton* Security Antivirus

In addition, IT needs visibility into all installed applications and must be able to detect mobile malware in real time, blacklist vulnerable applications, and leverage a secure enterprise app store or catalog to distribute and update approved applications.

Enforce Robust Security Measures

As with all mobile devices, strong security measures are necessary to protect the corporate network. Although specific policies will vary according to industry, these are our baseline recommendations for Enterprise Mobility Management across all approved devices: require strong passwords, enforce data encryption, control app usage based on Wi-Fi* networks, and block certain functions, including copy/paste, location services, email, camera, and the microphone based on access policies and device location.

Best security practices

In addition, best security practices include:

Security and Data Separation – Devices in Android for Work deployments use hardware-based encryption and admin-managed policies to ensure business data stays separate and safe from malware while personal information stays private.

Support for both employee-owned and company-provisioned devices – Android for Work users can safely use a single Android device for business and personal use, and companies can provision devices they own and configure work profiles on employee-owned devices.

Remote Management – Admins can remotely control all work-related policies, applications, and data, and can wipe them from a device without touching the device owner's personal data.

Seamless User Experience – Android for Work delivers a consistent experience across all devices and lets users intuitively and effortlessly switch between work and personal apps. Business apps appear with personal apps in the launcher and recent apps list, but business app icons have badges that clearly distinguish them.

Simplified Application Deployment – Admins can use Google Play to find, whitelist, and deploy business apps to Android for Work devices. They can even use Google Play to deploy internal applications and resources. (See the Google Play for Work Help Center.)

Divide Productivity Suite – Users who don't have Google Apps for Work can instead use a full suite of secure productivity apps specifically designed for Android for Work. The suite includes business email, calendar, contacts, tasks, and download management.

Google offers an out-of-the-box Android for Work solution with its Google Apps for Work productivity suite. The solution lets Google Apps for Work administrators access EMM functionality in the Admin console that expands their current device management capabilities.

Implement device management policies

IT needs to be able to centrally manage and configure Android devices. Recommended capabilities include remotely wiping lost or stolen devices, automatically wiping devices after a set number of failed unlock attempts, and location services that identify device coordinates in real time and enforce access policies accordingly.

Google designed Android and Google Play to provide a safer experience. With that goal in mind, the Android Security team works hard to minimize the security risks on Android devices. Google's multi-layered approach starts with prevention and continues with malware detection and rapid response should any issues arise. More specifically, Google:

  • Strives to prevent security issues from occurring through design reviews, penetration testing and code audits
  • Performs security reviews prior to releasing new versions of Android and Google Play
  • Publishes the source code for Android, thus allowing the broader community to uncover flaws and contribute to making Android the most secure mobile platform
  • Works hard to minimize the impact of security issues with features like the application sandbox
  • Regularly scans Google Play applications for vulnerabilities and security issues and removes them if they pose serious harm to the user devices or data
  • Has a rapid response program in place to handle vulnerabilities found in Android by working with hardware and carrier partners to quickly resolve security issues and push security patches

The Android team works very closely with the wider security research community to share ideas, apply best practices, and implement improvements. Android is part of the Google Patch Reward Program, which pays developers when they contribute security patches to popular open source projects, many of which form the foundation for the Android Open Source Project (AOSP). Google is also a member of the Forum of Incident Response and Security Teams (FIRST).

Related articles and resources

About the Author

Vitaliy Kalinin works in the Software & Services Group at Intel Corporation. He is a PhD student at Lobachevsky State University in Nizhny Novgorod, Russia. He has a Bachelor's degree in economics and mathematics and a Master's degree in applied economics and informatics. His main interest is mobile technologies and game development.


Notices

No computer system can be absolutely secure.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Programming to Offload Image Processing on Android* Applications

1.    Introduction

This article walks through an example Android application that offloads image processing using the OpenCL™ and RenderScript programming languages. These languages are designed to take advantage of highly parallel graphics hardware (shaders) to process large data sets and highly repetitive tasks. Although you can use other languages to offload image processing in Android applications, this article shows OpenCL and RenderScript sample code for developing both the application infrastructure and the image processing algorithm code. The OpenCL API wrapper class, used to facilitate programming and execution of OpenCL image processing algorithms, is also shown; its source code is available license-free for anyone to use.

You should be familiar with OpenCL, RenderScript, and Android programming concepts as this article only covers the instructions to offload image processing or media generation computes. You should also have an Android device that is equipped, enabled, and configured to run OpenCL (refer to Intel® SDK for OpenCL for Android device installation).

Note: While other languages and techniques to offload image processing or media generation are available, the goal here is only to highlight code differences. A future article is planned that will highlight performance differences between OpenCL and RenderScript executing on GPUs.

1.1    Application UI Design

In the sample application, the UI has three radio buttons so that users can quickly switch application execution between RenderScript, OpenCL, or native code. Another menu setting allows users to select whether to run OpenCL on the CPU or GPU. The menu also gives users the list of implemented effects to run, so they can select the effect they want to run. Selecting a device only applies to OpenCL (not RenderScript or native code). Intel® x86 platforms include OpenCL runtime support on both CPU and GPU.

Below is a screenshot of the main UI which shows a version of the plasma effect being processed by OpenCL. The sample application UI shows performance results when running OpenCL, RenderScript, or native code.

The plasma effect being processed by OpenCL

The performance metrics include frames per second (fps), frame render time, and effect compute elapsed time. The performance metrics are highlighted in the screenshot below.

The performance metrics highlighted

Note that performance numbers shown on the screen capture are sample metrics; actual performance metrics will vary depending on the device.
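As a rough sketch of how such metrics can be derived (the helper names below are illustrative, not taken from the sample source), frames per second and average per-frame compute time follow directly from a frame counter and elapsed wall-clock time:

```c
#include <stdint.h>

/* Illustrative metric helpers; names are hypothetical, not from the sample. */

/* Frames per second over an elapsed interval given in milliseconds. */
static double fps_from_counts(uint64_t frames, uint64_t elapsed_ms)
{
    if (elapsed_ms == 0) return 0.0;   /* avoid divide-by-zero on the first frame */
    return (double)frames * 1000.0 / (double)elapsed_ms;
}

/* Average per-frame compute time in milliseconds. */
static double avg_frame_ms(uint64_t total_compute_ms, uint64_t frames)
{
    if (frames == 0) return 0.0;
    return (double)total_compute_ms / (double)frames;
}
```

In the sample application these values would be updated by the background thread each time an effect frame completes.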

1.2    APIs and SDKs

In addition to the ADT (Android Development Tools, which also includes the Android SDK), the main Android-based APIs used to program the sample application are RenderScript and the Intel® SDK for OpenCL for Android applications.

The Intel SDK is based on and adheres to the OpenCL™ specification, an open, royalty-free standard for cross-platform programming. For more details, refer to the OpenCL standard on the Khronos web site.

RenderScript, first available with the ADT for Android 2.2 (API Level 8), is a framework for running compute-intensive tasks on Android. RenderScript is primarily oriented toward data-parallel computations, although serial computational workloads can benefit as well. Refer to the Android developer site for more information.

The latest ADT available from Google’s open source repository includes the appropriate packages that need to be imported to use RenderScript, JNI (Java* Native Interface), and runtime APIs. For OpenCL setup, configuration, and runtime refer to this OpenCL Development for Android OS article. For additional programming details see RenderScript or OpenCL.

1.3    Infrastructure Code

The infrastructure code consists of the “main” activity and helper functions. This section highlights helper functions and code for setting up the UI, selecting which effect and language technology to run, and, for OpenCL, which compute device to use.

While several helper functions were implemented to integrate the user selection commands, only two are highlighted here:

The backgroundThread() helper function starts a thread that periodically calls the step process function to process image effects. The code and functionality used in this function are reused from another sample application posted in the Getting Started with RenderScript article, and you can find further details here (PDF).

The processStep() function is called by the backgroundThread() to process and run the image effects. The function relies on a radio button callback function to determine which language to use. The processStep() function invokes the appropriate method to process the image effect using OpenCL, RenderScript, or plain native C/C++ code. Since this code runs on a background thread, users can select a language to run by simply clicking or touching a radio button, even while an effect is being processed. The application dynamically switches to execute the appropriate step render function for a given image effect.

// The processStep() method runs in a separate (background) thread.
private void processStep() {
    try {
        switch (this.type.getCheckedRadioButtonId()) {
        case R.id.type_renderN:
            oclFlag = 0; // OpenCL is OFF
            stepRenderNative();
            break;
        case R.id.type_renderOCL:
            oclFlag = 1; // OpenCL is ON
            stepRenderOpenCL();
            break;
        case R.id.type_renderRS:
            oclFlag = 0; // OpenCL is OFF
            stepRenderScript();
            break;
        default:
            return;
        }
    } catch (RuntimeException ex) {
        // Handle exception as appropriate and log error
        Log.wtf("Android Image Processing", "render failed", ex);
    }
}

1.4    Java Definition of Native Functions

The sample application implements a NativeLib class, which primarily defines functions that call into the native functionality through JNI to process a given effect. For instance, the sample application implements three effects: plasma, sepia, and monochrome. As such, the class defines the renderPlasma(…), renderSepia(…), and renderMonoChrome(…) functions. These Java functions serve as entry points through JNI to either run native or OpenCL functionality.

The JNI function either executes C/C++ code or sets up and executes the OpenCL program that implements the image effect. The class uses the Android bitmap and AssetManager packages. The BitMap objects are used to pass and return data for the image or media being processed. The application relies on the AssetManager object to gain access to the OpenCL files (e.g., sepia.cl) where the OpenCL kernels are defined.

Below is the actual NativeLib Java class definition. The //TODO comment is included to illustrate that the application can be easily extended to implement additional image effects.

package com.example.imageprocessingoffload;
import android.content.res.AssetManager;
import android.graphics.Bitmap;

public class NativeLib
{
    // Implemented in libimageeffectsoffloading.so
    public static native void renderPlasma(Bitmap bitmapIn, int renderocl, long time_ms, String eName, int devtype, AssetManager mgr);

    public static native void renderMonoChrome(Bitmap bitmapIn, Bitmap bitmapOut, int renderocl, long time_ms, String eName, int simXtouch, int simYtouch, int radHi, int radLo, int devtype, AssetManager mgr);

    public static native void renderSepia(Bitmap bitmapIn, Bitmap bitmapOut, int renderocl, long time_ms, String eName, int simXtouch, int simYtouch, int radHi, int radLo, int devtype, AssetManager mgr);

    //TODO public static native <return type> render<Effectname>(…);

    //load actual native library
    static {
        System.loadLibrary("imageeffectsoffloading");
    }
}

Note that the Android AssetManager and BitMap objects are passed for image input and image results to the native code. The AssetManager object is used by native code to be able to access CL files where the OpenCL kernels are defined. The BitMap object is used to make pixel data available for native code to compute and produce image results.

The UI parameter deviceType is used to indicate whether to execute OpenCL on the CPU or the GPU. The Android system must be configured and capable of running OpenCL on both devices. Modern Intel® Atom™ and Intel® Core™ processors can run OpenCL on the CPU and the integrated graphics processor or GPU.

The eName parameter indicates which OpenCL kernel to compile and run. Because the sample application implements one JNI function per image effect, this parameter might appear redundant. However, multiple related image effects can be defined in a single CL file and/or JNI function; in such cases, eName selects the appropriate CL program and/or kernel to compile and load.

The renderocl parameter is used as a flag that indicates whether to run OpenCL or native C/C++ code. This flag is only set when a user selects the OpenCL radio button; otherwise, it remains unset.

The time_ms parameter is used to pass a time stamp (milliseconds), which is used to calculate the performance metrics. In the plasma effect the time stamp is used to calculate the plasma effect stepping.

Other arguments are specific to the image effect algorithm to render the effect radially from the center of the image. For example, the simXtouch, simYtouch, radLo, and radHi parameters along with the width and height are used to calculate and show radial progress of the monochrome and sepia effects.
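The radial logic these parameters drive can be sketched as a small predicate (a simplified reading of the kernels shown later in this article; the function name is ours). A pixel at (x, y) is compared against squared radii around the touch point, so no square root is needed:

```c
/* Returns which zone a pixel falls in, mirroring the kernels' comparison
 * polar = xRel*xRel + yRel*yRel. radHi and radLo are *squared* radii.
 *   1 = inside the inner radius (effect applied)
 *   0 = outside the outer radius (pixel left untouched)
 *   2 = in the ring between the radii (drawn white in the sample kernels) */
static int radial_zone(int x, int y, int xTouch, int yTouch, int radHi, int radLo)
{
    int xRel = x - xTouch;
    int yRel = y - yTouch;
    int polar = xRel * xRel + yRel * yRel;
    if (polar > radHi || polar < radLo)
        return (polar < radLo) ? 1 : 0;
    return 2;
}
```

Growing radHi and radLo frame by frame is what produces the visible radial progress of the effect.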

1.5    Definitions and Resources to run Native Code (C or OpenCL)

This section includes the JNI native function definitions for each effect implemented in the sample application. As previously mentioned, one function per effect is used to simplify the explanation and illustrate the functional elements needed to offload image effect processing with OpenCL. The serial C code is referenced, and code snippets are included, so that a future version of the sample application can be used to compare performance across these language technologies.

The JNI functions have a 1:1 relationship with the Java native functions, so it is very important to declare and define the JNI counterpart functions correctly. The Java SDK includes the javah tool, which generates the exact JNI function declarations. Using it is highly recommended to avoid situations where code compiles correctly but fails at runtime.

Below are the JNI functions for image effects offloading in the sample application. The function signatures were generated by the javah utility.

// Defines new JNI entry function signatures
#ifndef _Included_com_example_imageprocessingoffload_NativeLib
#define _Included_com_example_imageprocessingoffload_NativeLib
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     com_example_imageprocessingoffload_NativeLib
 * Method:    renderPlasma
 * Signature: (Landroid/graphics/Bitmap;IJLjava/lang/String;)Ljava/lang/String;
 */
JNIEXPORT void JNICALL Java_com_example_imageprocessingoffload_NativeLib_renderPlasma
  (JNIEnv *, jclass, jobject, jint, jlong, jstring, jint, jobject);

/*
 * Class:     com_example_imageprocessingoffload_NativeLib
 * Method:    renderMonoChrome
 * Signature: (Landroid/graphics/Bitmap;Landroid/graphics/Bitmap;IJLjava/lang/String;)Ljava/lang/String;
 */
JNIEXPORT void JNICALL Java_com_example_imageprocessingoffload_NativeLib_renderMonoChrome
  (JNIEnv *, jclass, jobject, jobject, jint, jlong, jstring, jint, jint, jint, jint, jint, jobject);

/*
 * Class:     com_example_imageprocessingoffload_NativeLib
 * Method:    renderSepia
 * Signature: (Landroid/graphics/Bitmap;Landroid/graphics/Bitmap;IJLjava/lang/String;)Ljava/lang/String;
 */
JNIEXPORT void JNICALL Java_com_example_imageprocessingoffload_NativeLib_renderSepia
  (JNIEnv *, jclass, jobject, jobject, jint, jlong, jstring, jint, jint, jint, jint, jint, jobject);
#ifdef __cplusplus
}
#endif
#endif

The javah tool can generate the correct JNI function signatures; however, the class or classes that define the Java native function must already be compiled in your Android application project. If a header file is to be generated, the javah command can be used as follows:

     {javahLocation} -o {outputFile} -classpath {classpath} {importName}

For the sample application the function signatures were generated as:

      javah -o junk.h -classpath bin\classes com.example.imageprocessingoffload.NativeLib

The JNI function signatures in junk.h were then added to imageeffects.cpp, which contains the functionality to set up and run OpenCL or C code. Next, we allocate resources to be able to run OpenCL or native code for the implemented effects: plasma, monochrome, and sepia.

     1.5.1    Plasma Effect

The Java_com_example_imageprocessingoffload_NativeLib_renderPlasma(…) function is the entry code to execute either OpenCL or native code for the plasma effect. The functions startPlasmaOpenCL(…), runPlasmaOpenCL(…), and runPlasmaNative(…) are external to the imageeffects.cpp code and are defined in a separate plasmaEffect.cpp source file. For reference, you can find the plasmaEffect.cpp source file in the OpenCL wrapper class code download.

The renderPlasma(…) entry function utilizes the OpenCL wrapper class to query the Android device system for OpenCL support. It calls the wrapper class function ::initOpenCL(…) to initialize the OpenCL environment. The devtype parameter selects the CPU or the GPU as the device on which the OpenCL context is created. The ceName parameter identifies the CL file that the Android asset manager loads so the kernel code can be compiled.

If and when the OpenCL environment is successfully set up, the renderPlasma(…) entry function calls the startPlasmaOpenCL() function to allocate OpenCL resources and start execution of the plasma OpenCL kernel. Note that gOCL is a global variable that holds the object instance of the OpenCL wrapper class. The gOCL variable is visible to all JNI entry functions. This way the OpenCL environment can be initialized by any of the programmed effects.

The plasma effect does not use images, media rendered on the screen is generated by the programmed algorithm. The bitmapIn parameter is a BitMap object that holds the media that is generated by the plasma effect. The pixels parameter passed in the startPlasma(…) function is mapped to the bitmap texture and is used by the native or OpenCL kernel code to read and write pixel data for textures to render on the screen. Once again, the assetManager object is used to access the CL file that contains the OpenCL kernel for the plasma effect.

JNIEXPORT void JNICALL Java_com_example_imageprocessingoffload_NativeLib_renderPlasma(JNIEnv * env, jclass, jobject bitmapIn, jint renderocl, jlong time_ms, jstring ename, jint devtype, jobject assetManager) {
… // code omitted to simplify

    // code locks mem for BitMapIn and sets “pixels” pointer that is passed to OpenCL or Native functions.
    ret = AndroidBitmap_lockPixels(env, bitmapIn, &pixels);

… // code omitted to simplify

    // Pseudocode: initialize OpenCL on the first call, then run the kernel
    if (/* OCL not yet initialized */) {
        AAssetManager *amgr = AAssetManager_fromJava(env, assetManager);
        gOCL.initOpenCL(clDeviceType, ceName, amgr);
        startPlasmaOpenCL((cl_ushort *) pixels, infoIn.height, infoIn.width, (float) time_ms, ceName, cpinit);
    } else {
        runPlasmaOpenCL(infoIn.width, infoIn.height, (float) time_ms, (cl_ushort *) pixels);
    }
… // code omitted
}

The startPlasmaOpenCL(…) external function generates and populates the Palette and Angles buffers that contain data needed for the plasma effect. To start running the plasma OpenCL kernel, the function relies on the OpenCL command queue, context, and kernel, which are defined as data members of the wrapper class.

The runPlasmaOpenCL(…) function runs the plasma OpenCL kernel continually. A separate function is utilized once the OpenCL kernel gets started, and subsequent kernel executions only need a new time stamp value as input. Only the kernel argument for the time stamp value needs to be sent for the next kernel run iteration, hence the need for a separate function.

extern int startPlasmaOpenCL(cl_ushort* pixels, cl_int height, cl_int width, cl_float ts, const char* eName, int inittbl);
extern int runPlasmaOpenCL(int width, int height, cl_float ts, cl_ushort *pixels);
extern void runPlasmaNative( AndroidBitmapInfo*  info, void*  pixels, double  t, int inittbl );

The runPlasmaNative(…) function contains the plasma algorithm logic written in C code. The inittbl argument is used as Boolean to indicate whether the Palette and Angles data needed by the plasma effect needs to be generated or not. The OpenCL kernel code for the plasma effect can be found in the plasmaEffect.cpp source file.

#define FBITS		16
#define FONE		(1 << FBITS)
#define FFRAC(x)	((x) & ((1 << FBITS)-1))
#define FIXED_FROM_FLOAT(x)  ((int)((x)*FONE))

/* Color palette used for rendering plasma */
#define  PBITS   8
#define  ABITS   9
#define  PSIZE   (1 << PBITS)
#define  ANGLE_2PI (1 << ABITS)
#define  ANGLE_MSK (ANGLE_2PI - 1)

#define  YT1_INCR  FIXED_FROM_FLOAT(1/100.0f)
#define  YT2_INCR  FIXED_FROM_FLOAT(1/163.0f)
#define  XT1_INCR  FIXED_FROM_FLOAT(1/173.0f)
#define  XT2_INCR  FIXED_FROM_FLOAT(1/242.0f)

#define  ANGLE_FROM_FIXED(x)	((x) >> (FBITS - ABITS)) & ANGLE_MSK

ushort pfrom_fixed(int x, __global ushort *palette)
{
    if (x < 0) x = -x;
    if (x >= FONE) x = FONE-1;
    int  idx = FFRAC(x) >> (FBITS - PBITS);
    return palette[idx & (PSIZE-1)];
}

__kernel
void plasma(__global ushort *pixels, int height, int width, float t, __global ushort *palette, __global int *angleLut)
{
    int yt1 = FIXED_FROM_FLOAT(t/1230.0f);
    int yt2 = yt1;
    int xt10 = FIXED_FROM_FLOAT(t/3000.0f);
    int xt20 = xt10;

    int x = get_global_id(0);
    int y = get_global_id(1);
    int tid = x+y*width;

    yt1 += y*YT1_INCR;
    yt2 += y*YT2_INCR;

    int base = angleLut[ANGLE_FROM_FIXED(yt1)] + angleLut[ANGLE_FROM_FIXED(yt2)];
    int xt1 = xt10;
    int xt2 = xt20;

    xt1 += x*XT1_INCR;
    xt2 += x*XT2_INCR;

    int ii = base + angleLut[ANGLE_FROM_FIXED(xt1)] + angleLut[ANGLE_FROM_FIXED(xt2)];
    pixels[tid] = pfrom_fixed(ii/4, palette);
}

The RenderScript kernel code for the plasma effect:

#pragma version(1)
#pragma rs java_package_name(com.example.imageprocessingoffload)

rs_allocation *gPalette;
rs_allocation *gAngles;
rs_script gScript;
float ts;
int gx;
int gy;

static int32_t intFromFloat(float xfl) {
      return (int32_t)((xfl)*(1 << 16));
}
const float YT1_INCR = (1/100.0f);
const float YT2_INCR = (1/163.0f);
const float XT1_INCR = (1/173.0f);
const float XT2_INCR = (1/242.0f);

static uint16_t pfrom_fixed(int32_t dx) {
    unsigned short *palette = (unsigned short *)gPalette;
    uint16_t ret;
    if (dx < 0)  dx = -dx;
    if (dx >= (1 << 16))  dx = (1 << 16)-1;

    int  idx = ((dx & ((1 << 16)-1)) >> 8);
    ret = palette[idx & ((1<<8)-1)];
    return ret;
}

uint16_t __attribute__((kernel)) root(uint16_t in, uint32_t x, uint32_t y) {
    unsigned int *angles = (unsigned int *)gAngles;
    uint32_t out = in;
    int yt1 = intFromFloat(ts/1230.0f);

    int yt2 = yt1;
    int xt10 = intFromFloat(ts/3000.0f);
    int xt20 = xt10;

    int y1 = y*intFromFloat(YT1_INCR);
    int y2 = y*intFromFloat(YT2_INCR);
    yt1 = yt1 + y1;
    yt2 = yt2 + y2;

    int a1 = (yt1 >> 7) & ((1<<9)-1);
    int a2 = (yt2 >> 7) & ((1<<9)-1);
    int base = angles[a1] + angles[a2];

    int xt1 = xt10;
    int xt2 = xt20;
    xt1 += x*intFromFloat(XT1_INCR);
    xt2 += x*intFromFloat(XT2_INCR);

    a1 = (xt1 >> (16-9)) & ((1<<9)-1);
    a2 = (xt2 >> (16-9)) & ((1<<9)-1);
    int ii = base + angles[a1] + angles[a2];

   out = pfrom_fixed(ii/4);
   return out;
}
void filter(rs_script gScript, rs_allocation alloc_in, rs_allocation alloc_out) {
    //rsDebug("Inputs TS, X, Y:", ts, gx, gy);
    rsForEach(gScript, alloc_in, alloc_out);
}
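The OpenCL and RenderScript kernels above implement the same fixed-point angle lookup, just spelled differently: the CL macro ANGLE_FROM_FIXED reduces to the (x >> 7) & 511 arithmetic that the RenderScript kernel writes inline. A small host-side check (ours, not part of the sample; outer parentheses added to the macro) makes the equivalence concrete:

```c
/* Equivalent of the plasma CL kernel's macros. */
#define FBITS      16
#define ABITS      9
#define ANGLE_2PI  (1 << ABITS)
#define ANGLE_MSK  (ANGLE_2PI - 1)
#define ANGLE_FROM_FIXED(x)  (((x) >> (FBITS - ABITS)) & ANGLE_MSK)

/* The RenderScript kernel's inline spelling of the same index. */
static int rs_angle_index(int x)
{
    return (x >> 7) & ((1 << 9) - 1);
}
```

Keeping the fixed-point math identical across the two kernels is what lets the sample render the same plasma pattern regardless of which technology the user selects.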

     1.5.2    Monochrome Effect

The Java_com_example_imageprocessingoffload_NativeLib_renderMonoChrome(…) function is the entry code to execute either OpenCL or native code for the monochrome effect. The functions executeMonochromeOpenCL(…) and executeMonochromeNative(…) are external to the imageeffects.cpp code and are defined in a separate source file. As with the plasma effect, this entry function also utilizes the OpenCL wrapper class to query the Android device system for OpenCL support and calls ::initOpenCL(…) to initialize the OpenCL environment.

The following two lines of code simply declare (make visible to the NDK compiler) the function signatures of executeMonochromeOpenCL(…) and executeMonochromeNative(…). These lines are necessary because the functions are defined in a separate source file.

extern int executeMonochromeOpenCL(cl_uchar4 *srcImage, cl_uchar4 *dstImage, int radiHi, int radiLo, int xt, int yt, int nWidth, int nHeight);
extern int executeMonochromeNative(cl_uchar4 *srcImage, cl_uchar4 *dstImage, int radiHi, int radiLo, int xt, int yt, int nWidth, int nHeight);

Unlike the plasma effect, this effect uses an input and an output image. Both bitmapIn and bitmapOut are allocated as ARGB_8888 bitmaps, and both are mapped to CL buffers of cl_uchar4 vectors. Note that pixelsIn and pixelsOut are typecast, as this is necessary for OpenCL to map the BitMap objects to buffers of cl_uchar4 vectors.
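Why does the cl_uchar4 typecast work? An ARGB_8888 bitmap stores each pixel as four consecutive bytes, so a locked pixel buffer can be reinterpreted as an array of 4-byte vectors. A self-contained sketch (with a stand-in struct for cl_uchar4, since this runs outside OpenCL; on little-endian Android devices the in-memory byte order is R, G, B, A):

```c
#include <stdint.h>

/* A stand-in for OpenCL's cl_uchar4 so the sketch is self-contained. */
typedef struct { uint8_t x, y, z, w; } uchar4_t;

/* An ARGB_8888 bitmap is 4 bytes per pixel, row-major; a locked pixel
 * buffer can therefore be indexed as width*height 4-byte vectors. */
static uchar4_t pixel_at(const uint8_t *pixels, int width, int x, int y)
{
    const uint8_t *p = pixels + 4 * (y * width + x);
    uchar4_t v = { p[0], p[1], p[2], p[3] };
    return v;
}
```

This is exactly the layout assumption behind casting the AndroidBitmap_lockPixels() pointer to cl_uchar4* in the entry functions.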

JNIEXPORT void JNICALL Java_com_example_imageprocessingoffload_NativeLib_renderMonoChrome(JNIEnv * env, jclass obj, jobject bitmapIn, jobject bitmapOut, jint renderocl, jlong time_ms, jstring ename, jint xto, jint yto, jint radHi, jint radLo, jint devtype, jobject assetManager)  {

  … // code omitted for simplification

   // code locks mem for BitMapIn and sets “pixelsIn” pointer that is passed to OpenCL or Native functions.
   ret = AndroidBitmap_lockPixels(env, bitmapIn, &pixelsIn);

   // code locks mem for BitMapOut and sets “pixelsOut” pointer that is passed to OpenCL or Native functions.
   ret = AndroidBitmap_lockPixels(env, bitmapOut, &pixelsOut);

 … // code omitted for simplification
 // Pseudocode: run OpenCL (initializing on the first call) or native code
 if (/* OpenCL selected */) {
    if (/* OCL not yet initialized */) {
       AAssetManager *amgr = AAssetManager_fromJava(env, assetManager);
       gOCL.initOpenCL(clDeviceType, ceName, amgr);
    } else {
       executeMonochromeOpenCL((cl_uchar4*) pixelsIn, (cl_uchar4*) pixelsOut, radiHi, radiLo, xt, yt, infoIn.width, infoIn.height);
    }
 } else {
    executeMonochromeNative((cl_uchar4*) pixelsIn, (cl_uchar4*) pixelsOut, radiHi, radiLo, xt, yt, infoIn.width, infoIn.height);
 }
… // code omitted
}

When executeMonochromeOpenCL(…) is called, the function typecasts and passes pixelsIn and pixelsOut as cl_uchar4 buffers. The function uses OpenCL APIs to create buffers and other resources as appropriate. It sets kernel arguments and queues up necessary commands to execute the OpenCL kernel. The image input buffer which is pointed to by pixelsIn is allocated as a read_only buffer. The kernel code uses the pixelsIn pointer to get incoming pixel data. The pixel data is used by the kernel algorithm to convert the incoming image to a monochrome image. The output buffer is read_write buffer that holds the image results and is pointed to by pixelsOut. For further details on OpenCL refer to Intel’s programming and optimization guide.

The executeMonochromeNative(…) function has the monochrome algorithm programmed in C code. The algorithm is basic and consists of an outer loop (for y) and an inner loop (for x) that compute the pixel data, storing the result in dstImage, pointed to by pixelsOut. The srcImage buffer, pointed to by pixelsIn, is used to dereference input pixel data for the formula that converts pixels to monochrome.
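The per-pixel formula itself is a weighted luminance sum. A minimal C reference (the function name is ours; the kernels shown next use the same 0.299/0.587/0.114 channel weights):

```c
#include <stdint.h>

/* Reference monochrome conversion using the kernels' channel weights. */
static uint8_t mono_luma(uint8_t r, uint8_t g, uint8_t b)
{
    float y = 0.299f * r + 0.587f * g + 0.114f * b;
    /* Saturate and round to nearest, mirroring convert_uchar4_sat_rte. */
    if (y > 255.0f) y = 255.0f;
    return (uint8_t)(y + 0.5f);
}
```

The same value is written to all three color channels to produce the grayscale output pixel.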

The OpenCL kernel code for the monochrome effect:

constant uchar4 cWhite = {1.0f, 1.0f, 1.0f, 1.0f};
constant float3 channelWeights = {0.299f, 0.587f, 0.114f};
constant float saturationValue = 0.0f;

__kernel void mono (__global uchar4 *in, __global uchar4 *out, int4 intArgs, int width) {
    int x = get_global_id(0);
    int y = get_global_id(1);

    int xToApply = intArgs.x;
    int yToApply = intArgs.y;
    int radiusHi = intArgs.z;
    int radiusLo = intArgs.w;
    int tid = x + y * width;
    uchar4 c4 = in[tid];
    float4 f4 = convert_float4 (c4);
    int xRel = x - xToApply;
    int yRel = y - yToApply;
    int polar = xRel*xRel + yRel*yRel;

    if (polar > radiusHi || polar < radiusLo)   {
        if (polar < radiusLo)   {
            float4 outPixel = dot (f4.xyz, channelWeights);
            outPixel = mix ( outPixel, f4, saturationValue);
            outPixel.w = f4.w;
            out[tid] = convert_uchar4_sat_rte (outPixel);
        }
        else  {
            out[tid] = convert_uchar4_sat_rte (f4);
        }
    }
    else   {
         out[tid] = convert_uchar4_sat_rte (cWhite);
    }
}

The RenderScript kernel code for the monochrome effect:

#pragma version(1)
#pragma rs java_package_name(com.example.imageprocessingoffload)

int radiusHi;
int radiusLo;
int xToApply;
int yToApply;

const float4 gWhite = {1.f, 1.f, 1.f, 1.f};
const float3 channelWeights = {0.299f, 0.587f, 0.114f};
float saturationValue = 0.0f;

uchar4 __attribute__((kernel)) root(const uchar4 in, uint32_t x, uint32_t y)
{
    float4 f4 = rsUnpackColor8888(in);
    int xRel = x - xToApply;
    int yRel = y - yToApply;
    int polar = xRel*xRel + yRel*yRel;
    uchar4 out;

    if(polar > radiusHi || polar < radiusLo) {
        if(polar < radiusLo) {
            float3 outPixel = dot(f4.rgb, channelWeights);
            outPixel = mix( outPixel, f4.rgb, saturationValue);
            out = rsPackColorTo8888(outPixel);
        }
        else {
            out = rsPackColorTo8888(f4);
        }
    }
    else {
         out = rsPackColorTo8888(gWhite);
    }
    return out;
}

     1.5.3    Sepia Effect

The code for the sepia effect is very similar to the code for the monochrome effect. The only difference is in the algorithm calculation of the pixels. Different formula and constants are used to arrive at the resultant pixel data. Here are the function declarations for the sepia effect to run OpenCL and native C code. As you can see, the function declarations and definitions, if not for the name difference, are identical.

extern int executeSepiaOpenCL(cl_uchar4 *srcImage, cl_uchar4 *dstImage, int radiHi, int radiLo, int xt, int yt, int nWidth, int nHeight);

extern int executeSepiaNative(cl_uchar4 *srcImage, cl_uchar4 *dstImage, int radiHi, int radiLo, int xt, int yt, int nWidth, int nHeight);

JNIEXPORT void JNICALL Java_com_example_imageprocessingoffload_NativeLib_renderSepia(JNIEnv * env, jclass obj, jobject bitmapIn, jobject bitmapOut, jint renderocl, jlong time_ms, jstring ename, jint xto, jint yto, jint radHi, jint radLo, jint devtype, jobject assetManager) { … }

Source code snippets in Java_com_example_imageprocessingoffload_NativeLib_renderSepia(…) are very similar to the monochrome sample and are therefore omitted.

When executeSepiaOpenCL(…) is called, the function typecasts and passes pixelsIn and pixelsOut as cl_uchar4 buffers. The function uses OpenCL APIs to create buffers and other resources as appropriate. It sets kernel arguments and queues up the commands needed to execute the OpenCL kernel. The image input buffer, pointed to by pixelsIn, is allocated as a read_only buffer. The kernel code uses the pixelsIn buffer pointer to get pixel data, which the kernel algorithm uses to apply the sepia tone to the incoming image. The output buffer is a read_write buffer that holds the image results and is pointed to by pixelsOut.

The executeSepiaNative(…) function has the sepia algorithm programmed in C code. The algorithm is basic and consists of an outer loop (for y) and an inner loop (for x) that compute the pixel data, storing the result in dstImage, pointed to by pixelsOut. The srcImage buffer, pointed to by pixelsIn, is used to dereference input pixel data for the formula that converts pixels to sepia tones.
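The sepia formula is three weighted sums, one per output channel. A minimal C reference (function names are ours; the constants match the sepiaRed/sepiaGreen/sepiaBlue vectors in the kernels below):

```c
#include <stdint.h>

/* Saturate and round to nearest, mirroring convert_uchar4_sat_rte. */
static uint8_t clamp_u8(float v)
{
    if (v < 0.0f)   v = 0.0f;
    if (v > 255.0f) v = 255.0f;
    return (uint8_t)(v + 0.5f);
}

/* Reference sepia tone using the kernels' channel constants. */
static void sepia_rgb(uint8_t r, uint8_t g, uint8_t b,
                      uint8_t *outR, uint8_t *outG, uint8_t *outB)
{
    *outR = clamp_u8(0.393f * r + 0.769f * g + 0.189f * b);
    *outG = clamp_u8(0.349f * r + 0.686f * g + 0.168f * b);
    *outB = clamp_u8(0.272f * r + 0.534f * g + 0.131f * b);
}
```

Note that bright inputs saturate on the red and green channels, which is what gives sepia output its warm cast.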

The OpenCL kernel code for the sepia effect

constant uchar4 cWhite = {1, 1, 1, 1};
constant float3 sepiaRed = {0.393f, 0.769f, 0.189f};
constant float3 sepiaGreen = {0.349f, 0.686f, 0.168f};
constant float3 sepiaBlue = {0.272f, 0.534f, 0.131f};

__kernel void sepia(__global uchar4 *in, __global uchar4 *out, int4 intArgs, int2 wh)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    int width = wh.x;
    int height = wh.y;

    if(width <= x || height <= y) return;

    int xTouchApply = intArgs.x;
    int yTouchApply = intArgs.y;
    int radiusHi = intArgs.z;
    int radiusLo = intArgs.w;
    int tid = x + y * width;

    uchar4 c4 = in[tid];
    float4 f4 = convert_float4(c4);
    int xRel = x - xTouchApply;
    int yRel = y - yTouchApply;
    int polar = xRel*xRel + yRel*yRel;

    uchar4 pixOut;

    if(polar > radiusHi || polar < radiusLo)
    {
        if(polar < radiusLo)
        {
        	float4 outPixel;
            float tmpR = dot(f4.xyz, sepiaRed);
            float tmpG = dot(f4.xyz, sepiaGreen);
            float tmpB = dot(f4.xyz, sepiaBlue);

            outPixel = (float4)(tmpR, tmpG, tmpB, f4.w);
            pixOut = convert_uchar4_sat_rte(outPixel);
        }
        else
        {
            pixOut= c4;
        }
    }
    else
    {
         pixOut = cWhite;
    }
    out[tid] = pixOut;
}
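Note the final step, convert_uchar4_sat_rte(outPixel): the _sat suffix saturates each component to the uchar range 0..255, and _rte rounds half-way values to the nearest even integer. The same per-component behavior can be sketched in Java (class and method names are illustrative):

```java
// Per-component behavior of OpenCL's convert_uchar4_sat_rte:
// round half to even (rte), then saturate (sat) into 0..255.
public class SatRte {
    static int satRte(float v) {
        long rounded = (long) Math.rint(v);  // Math.rint rounds half to even
        return (int) Math.max(0, Math.min(255, rounded)); // saturate to uchar
    }
}
```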

The RenderScript kernel code for the sepia effect:

#pragma version(1)
#pragma rs java_package_name(com.example.imageprocessingoffload)
#pragma rs_fp_relaxed

int radiusHi;
int radiusLo;
int xTouchApply;
int yTouchApply;

rs_script gScript;
const float4 gWhite = {1.f, 1.f, 1.f, 1.f};

const static float3 sepiaRed = {0.393f, 0.769f, 0.189f};
const static float3 sepiaGreen = {0.349f, 0.686f, 0.168f};
const static float3 sepiaBlue = {0.272f, 0.534f, 0.131f};

uchar4 __attribute__((kernel)) sepia(uchar4 in, uint32_t x, uint32_t y)
{
    uchar4 result;
    float4 f4 = rsUnpackColor8888(in);

    int xRel = x - xTouchApply;
    int yRel = y - yTouchApply;
    int polar = xRel*xRel + yRel*yRel;

    if(polar > radiusHi || polar < radiusLo)
    {
        if(polar < radiusLo)
        {
            float3 out;

            float tmpR = dot(f4.rgb, sepiaRed);
            float tmpG = dot(f4.rgb, sepiaGreen);
            float tmpB = dot(f4.rgb, sepiaBlue);

            out.r = tmpR;
            out.g = tmpG;
            out.b = tmpB;
            result = rsPackColorTo8888(out);
        }
        else
        {
            result = rsPackColorTo8888(f4);
        }
    }
    else
    {
        result = rsPackColorTo8888(gWhite);
    }
    return result;
}

1.6    Code and Resources to Run RenderScript

What does a RenderScript implementation need to execute image effects? While not a rule or even a recommendation, the sample application uses common resources and variables defined in the global scope for the sake of simplicity. Android developers can use different methods to define common resources based on the application’s complexity.

The following common resources and global variables are declared and defined in the MainActivity.java source file.

private RenderScript rsContext;

The rsContext variable is common to all scripts and is used to store the RenderScript context, which is set up as part of the RenderScript framework. To learn more about its inner workings, refer to the RenderScript framework documentation.

private ScriptC_plasma plasmaScript;
private ScriptC_mono monoScript;
private ScriptC_sepia sepiaScript;

The plasmaScript, monoScript, and sepiaScript variables are instances of the classes that wrap access to the specific RenderScript kernels. The Eclipse* IDE automatically generates a Java class from each .rs file, i.e., ScriptC_plasma from plasma.rs, ScriptC_mono from mono.rs, and ScriptC_sepia from sepia.rs. The generated RenderScript wrapper classes are located in Java files under the gen folder; for example, for the sepia.rs file, the Java class is found in ScriptC_sepia.java. To generate the Java code, the .rs file must completely define the RenderScript kernel code and be syntactically correct. In the sample application, all ScriptC_<*> classes are imported in the MainActivity.java code.

private Allocation allocationIn;
private Allocation allocationOut;
private Allocation allocationPalette;
private Allocation allocationAngles;

Allocations are memory abstractions that RenderScript kernels operate on. For example, allocationIn and allocationOut hold texture data for the input and output images: allocationIn is the input to the script, and allocationOut holds the image data produced by the RenderScript kernel or kernels. The Palette and Angles allocations pass lookup-table and angle data to the kernel; this data is generated in the main activity code before invoking the RenderScript for the plasma effect and is needed to produce the plasma effect media.

The code to glue resources and generated code together to run RenderScript kernels is defined in the initRS(…) helper function for the sample application.

protected void initRS() { … };

The initRS() function initializes the RenderScript context via the create method of the RenderScript object. As previously stated, the context handle is common to all render scripts and is stored in the rsContext global variable; a RenderScript context is required to instantiate script objects. The following line of code creates the RenderScript context in the scope of the sample application's MainActivity, hence "this" is passed to the RenderScript.create(…) method call.

rsContext = RenderScript.create(this);

Once the RenderScript context is created, the specific application RenderScript object required to execute the kernel code is allocated. The following lines of source code show the logic in the initRS() function that instantiates RenderScript objects as appropriate.

if (effectName.equals("plasma")) {
	plasmaScript = new ScriptC_plasma(rsContext);
} else if (effectName.equals("mono")) {
	monoScript = new ScriptC_mono(rsContext);
} else if (effectName.equals("sepia")) {
	sepiaScript = new ScriptC_sepia(rsContext);
} // add here to add additional effects to the application

The stepRenderScript(…) helper function is called to run the RenderScript for a given effect. It uses the RenderScript object to set the required parameters and to invoke the RenderScript kernel. The source code below is part of the stepRenderScript(…) function and shows how the RenderScript kernels are invoked for the plasma and monochrome effects.

private void stepRenderScript(…) {

 … // code omitted for simplification
 if(effectName.equals("plasma")) {
	plasmaScript.bind_gPalette(allocationPalette);
	plasmaScript.bind_gAngles(allocationAngles);
	plasmaScript.set_gx(inX - stepCount);
	plasmaScript.set_gy(inY - stepCount);
	plasmaScript.set_ts(System.currentTimeMillis() - mStartTime);
	plasmaScript.set_gScript(plasmaScript);
	plasmaScript.invoke_filter(plasmaScript, allocationIn, allocationOut);
 }
 else if(effectName.equals("mono")) {
// Compute parameters "circle of effect" depending on number of elapsed steps.
	int radius = (stepApply == -1 ? -1 : 10*(stepCount - stepApply));
	int radiusHi = (radius + 2)*(radius + 2);
	int radiusLo = (radius - 2)*(radius - 2);
	// Setting parameters for the script.
	monoScript.set_radiusHi(radiusHi);
	monoScript.set_radiusLo(radiusLo);
	monoScript.set_xInput(xToApply);
	monoScript.set_yInput(yToApply);
	// Run the script.
	monoScript.forEach_root(allocationIn, allocationOut);
	if(stepCount > FX_COUNT)
	{
		stepCount = 0;
		stepApply = -1;
	}
 }
 else if(effectName.equals("sepia")) {… // code similar to mono effect
 }
 … // code omitted for simplification

};

The gPalette, gAngles, gx, gy, and gScript are global variables defined in the plasma RenderScript kernel, declared in the plasma.rs file; the RenderScript framework generates functions to pass the required data to the kernel at runtime. Variables defined as rs_allocation generate a bind_<var> function; for the plasma effect, bind_<var> functions are generated to bind the Palette and Angles data to the RenderScript context. For scalar arguments such as gx, gy, ts, and gScript, a set_<var> method is generated to send specific data for that parameter. The scalar parameters are used to send the running x, y values and the time stamp needed by the plasma RenderScript kernel. The invoke_filter(…) function is generated from the RenderScript definition; defining user functions like filter() in the plasma script is a way to write configurable and reusable RenderScript kernel code.

For the monochrome effect, the radius is used to calculate the radiusHi and radiusLo arguments. These, along with the xInput and yInput, are used to calculate and show radial progress of the monochrome effect. Note that for the monochrome script, instead of invoking a user function the forEach_root() is called directly. The forEach_root(…) is the default method and is generated by the framework for render scripts. Note that the radiusHi, radiusLo, xInput, and yInput are defined as global variables in the kernel code and that set_<var> methods are generated to pass required data to the RenderScript kernel.
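The ring geometry behind these parameters is easy to verify in plain Java. The sketch below (an illustrative standalone class, not part of the sample application) reproduces the radius computation from stepRenderScript(…) and the polar test the kernels apply:

```java
// "Circle of effect" math: the host computes a squared-radius band
// [radiusLo, radiusHi], and the kernel paints a pixel white when its
// squared distance from the touch point falls inside that band.
public class RingSketch {

    // Mirrors stepRenderScript(...): returns {radiusHi, radiusLo}.
    static int[] ringBounds(int stepCount, int stepApply) {
        int radius = (stepApply == -1 ? -1 : 10 * (stepCount - stepApply));
        int radiusHi = (radius + 2) * (radius + 2);
        int radiusLo = (radius - 2) * (radius - 2);
        return new int[] { radiusHi, radiusLo };
    }

    // Mirrors the kernel condition: true when the pixel is on the ring.
    static boolean onRing(int x, int y, int xTouch, int yTouch,
                          int radiusHi, int radiusLo) {
        int xRel = x - xTouch;
        int yRel = y - yTouch;
        int polar = xRel * xRel + yRel * yRel;
        return polar <= radiusHi && polar >= radiusLo;
    }
}
```

Two steps after the touch (stepCount = 5, stepApply = 3) the radius is 20, so pixels whose squared distance from the touch point lies in [324, 484] are painted white, producing the expanding ring.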

For more help, refer to the RenderScript source code definitions.

 

2.      OpenCL Wrapper Class

The wrapper class provides functions that wrap the OpenCL APIs used to compile and execute OpenCL kernels, as well as the APIs that initialize the OpenCL runtime. Its intent is to simplify initializing and setting up the runtime environment for executing OpenCL kernels. The following is a brief description and use of each method in the wrapper class. Use the Download link to get the full source of the OpenCL wrapper class.

class openclWrapper {
private:
	cl_device_id* mDeviceIds;	// Holds OpenCL device IDs (CPU, GPU, etc.)
	cl_kernel mKernel;		// Holds handle for kernel to run
	cl_command_queue mCmdQue;	// Holds command queue for CL device
	cl_context mContext;		// Holds OpenCL context
	cl_program mProgram;		// Holds OpenCL program handle

public:
	openclWrapper() {
		mDeviceIds = NULL;
		mKernel = NULL;
		mCmdQue = NULL;
		mContext = NULL;
		mProgram = NULL;
	};
	~openclWrapper() { };
	cl_context getContext() { return mContext; };
	cl_kernel getKernel() { return mKernel; };
	cl_command_queue getCmdQue() { return mCmdQue; };

	int createContext(cl_device_type deviceType);
	bool LoadInlineSource(char* &sourceCode, const char* eName);
	bool LoadFileSource(char* &sourceCode, const char* eName, AAssetManager *mgr);
	int buildProgram(const char* eName, AAssetManager *mgr);
	int createCmdQueue();
	int createKernel(const char *kname);
	// overloaded function
	int initOpenCL(cl_device_type clDeviceType, const char* eName, AAssetManager *mgr=NULL);
};
  • ::createContext(cl_device_type) function - a helper function that uses the device type (e.g., CPU or GPU) to validate OpenCL support and to get the device ID from the system. It uses the device ID to create the OpenCL execution context and is called as part of the OpenCL initialization steps. It returns SUCCESS and sets the class context handle, i.e., mContext, or returns FAIL if platform or device ID enumeration, or creation of the context itself, fails.
  • ::createCmdQue() function - enumerates the number of devices associated with the CL context. It relies on the private data member mContext to create the command queue. Returns SUCCESS and sets command queue handle, i.e., mCmdQue, or returns FAIL if unable to create the command queue for a device id previously enumerated by the createContext(…) function.
  • ::buildProgram(effectName, AssetManager) function - an overloaded function that takes the name of the image processing algorithm (the effect name) and a pointer to the Android JNI asset manager. The asset manager uses the effect name to locate and read the OpenCL file that contains the kernel source code; the wrapper class also uses the effect name to locate and load "inline" OpenCL source code. The declaration sets the asset manager pointer to NULL by default, so the function can be invoked with only the effect name, or with the effect name and a valid asset manager pointer; the pointer value decides whether inline-defined OpenCL code is compiled or the OpenCL code is loaded from a separate file. This lets the programmer define and deploy OpenCL programs either as inline strings or in separate OpenCL files.
    • The buildProgram(…) function invokes the OpenCL API clCreateProgramWithSource(…), passing the OpenCL context and the source buffer as arguments. It returns a program handle if the program object is created successfully; note that syntax errors in the OpenCL source are not reported until the program is built.
    • The clBuildProgram(…) API takes the program handle created by the clCreateProgramWithSource(…) or clCreateProgramWithBinary(…) APIs and compiles and links the program executable that will run on the CL device. In case of errors, you can use clGetProgramBuildInfo(…) to dump the compile errors; for an example, refer to the wrapper class source code.
  • ::createKernel(…) function - takes the effect name and uses the program object to create the kernel. If kernel creation is successful, the function returns SUCCESS. A valid kernel handle is stored in mKernel, which is subsequently used to set kernel arguments and to execute the OpenCL kernel that implements the image processing algorithm.
  • The ::getContext(),::getCmdQue(), and ::getKernel() methods simply return the context, command queue, and kernel handles. These handles are used in the JNI functions to be able to queue up required commands to run OpenCL kernels.

 

3.      Summary

This article highlighted some of the OpenCL techniques and procedures you can use to offload image processing in Android applications. Similar to RenderScript, OpenCL is a viable and powerful technology to offload your image processing workloads. As more devices support OpenCL, it is good to know that this language technology can offload, and hopefully speed up, your image processing workloads. For more information, refer to the Intel SDK for OpenCL documentation.

 

4.      About the author

Eli Hernandez is an Application Engineer in the Consumer Client and Power Enabling Group at Intel Corporation, where he works with customers to optimize their software for power efficiency and to run best on Intel hardware and software technologies. Eli joined Intel in August 2007 with over 12 years of experience in software development for the telecom and chemical industries. He received his B.S. in Electrical Engineering in 1989 and completed master's studies in Computer Science in 1991-1992 at DePaul University in Chicago.

Using multi-window feature as differentiation on Android*


Overview

Multi-window is a feature in the Android* OS that can differentiate your apps. Many OEMs and ODMs, such as Samsung, Ramos, and Huawei, use the feature to promote their products and make them stand out from the rest. In this article, we will introduce the multi-window function and show you how to implement it in your apps.

Figure 1. Multi-window use cases

Introduction

In June 2012, Cornerstone, the first open source multi-tasking framework, was developed. In August 2012, Samsung launched the first commercial multi-window product. From 2013 until now, there has been an explosion of multi-window solutions on the market (see Figure 2).

Figure 2. Multi-window evolution

The two types of multi-window styles are: floating style and docked style. The multi-window feature usually includes the open/close, resize, and swap functions. The open/close function starts/stops the feature. The resize function allows users to change the window sizes. The swap function exchanges the window positions.

Figure 3. Multi-window window styles

In 2013, many solutions, developed by OEM/ODMs, by ISVs, and by the open-source community, emerged on the market. The following table compares the different multi-window solutions.

| Feature | Cornerstone | Standout | Xposed | Tieto |
| --- | --- | --- | --- | --- |
| Description | Multi-tasking framework for the Android* OS | An open source library that can be used to create floating apps | A multi-window application that supports the docked window style | A project that aims to create a desktop-like user experience |
| Open/close, resize, maximize | Supported | Supported | Supported | Supported |
| Window style | Docked | Floating | Docked | Docked/Floating |
| Code modification | Android framework | Application layer | Android framework | Android framework |
| Application support | Supports all applications, but SurfaceView cannot be dynamically adjusted | Some assistant applications, for example, calculator | Application compatibility and stability need improvement | Supports all applications |
| Android version | Android 4.1~Android 4.4 | Android 4.1~Android 4.4 | Android 4.4 | Android 4.4 |
| Official website | http://www.onskreen.com | http://forum.xda-developers.com/showthread.php?t=1688531 | http://forum.xda-developers.com/xposed | https://github.com/tieto/multiwindow_for_android |

Software Architecture

You can modify the Android framework code to adapt more functions. The Android OS architecture is divided into layers.

For Android 4.2 and Android 4.3, the launcher and other applications all run on one stack, called the "main stack". As we know, multi-window needs more stacks to contain the multiple windows, so we need to modify the framework's ActivityManagerService class to add stack creation and stack management interfaces, modify WindowManagerService to adapt the views, and modify the InputManager to dispatch touch events to the corresponding windows.

Stack management changed significantly with the release of Android 4.4 and Android 5.0: the launcher and other applications can run on different stacks, and stacks and stack management functions were added. The figure below shows the stack differences between Android revisions.

Figure 4. Stack management changes between Android* 4.3 and Android 4.4

Let's focus on Android 5.0, codenamed Lollipop. As we know, the Android* OS uses callbacks to trigger the activity interface functions, but the main functionality is realized in the framework, so we will introduce two important classes: ActivityManagerService and WindowManagerService.

Figure 5. Lollipop’s software structure

Lollipop Activity Management

Because the multi-window feature depends on the stack, the following shows how to create a stack and how to start an activity on a stack.

In Lollipop, IActivityManager.java added the following interface functions.

Table 1. Lollipop source code changes

| New IActivityManager.java interface function | Description |
| --- | --- |
| public void moveTaskToStack(int taskId, int stackId, boolean toTop) | Move a task to another stack |
| public void resizeStack(int stackBoxId, Rect bounds) | Resize the stack |
| public void setFocusedStack(int stackId) | Set the currently focused stack |
| public boolean isInHomeStack(int taskId) | Check whether a task is in the home stack |

After startup, the SystemServer process launches the activity management and window management services. We can add RuntimeException statements to trace the process.

Figure 6. Progress of stack creation in Lollipop

Now let’s show how to start an activity on a stack.

Figure 7. Start activity on a stack

In Lollipop, adb (Android Debug Bridge) added the following commands.

Table 2. Lollipop's new adb commands

| ADB command | Function | Description |
| --- | --- | --- |
| adb shell am stack start | Start a new activity on <DISPLAY_ID> using an Intent | KitKat 4.4 also provided adb shell am stack create, which was removed in Lollipop 5.0. |
| adb shell am stack movetask | Move <TASK_ID> from its current stack to the top or bottom of <STACK_ID> | Usage: adb shell am stack movetask task_id stackid true/false. Note: this works on KitKat but not on Lollipop. |
| adb shell am stack resize | Change <STACK_ID> size and position to <LEFT,TOP,RIGHT,BOTTOM> | Usage: adb shell am stack resize task_id weight |

Lollipop window management

WindowManagerService is the core window manager. Its functions include input event dispatching, screen layout, and surface management.

Figure 8. WindowManagerService role in the graphics architecture [2]

Some common questions on multi-window

Multi-window has a resize function, but we have seen cases where a gaming animation can't be resized. The root cause is that Android's SurfaceFlinger can't dynamically adjust the surface size.

Figure 9. Games using SurfaceFlinger can’t dynamically adjust the windows size

Another issue is that some applications don’t display correctly in the multi-window. An example below shows the calculator not displaying correctly in the multi-window because the application uses a bad compatibility configuration.

Figure 10. Calculator with bad configuration

Will Android’s next version support multi-window?

Will Google release the multi-window feature in its next version of the OS? I found the following log information in the Lollipop source code. We can use the following command to search the multi-window log.

git log --grep "multiwindow"

The log content contains a line that reads “defer tap outside stack until multiwindows”. So we conclude that multi-window may be on Google’s roadmap.

Figure 11. Lollipop log about multi-window

Case study: Cornerstone

Onskreen developed Cornerstone, the first multi-window framework solution, targeting large-screen devices and tablets. You can download the source code from GitHub [3]. It supports only Android 4.1 and Android 4.2 and has not been released for higher Android versions, but we can analyze the Android 4.2 source code to get more technical details.

Figure 12. Cornerstone modifications on Jelly Bean

Summary

Many mobile devices now use Intel® processors running the Android OS. How can developers improve the user experience? How can their products best compete? These questions push us to constantly improve our products on Intel® Architecture (IA) devices. Multi-window is a good differentiating feature. It is convenient and allows consumers to do two or more things simultaneously. They can watch a video and IM friends with video feedback. They can play a game and read reviews of it. Several devices now support the multi-window feature: Ramos i12 tablet, Teclast x98 tablet, and Cube i7, which uses the Remix OS.

Figure 13. Multi-window feature on IA devices

Reference

[1] http://www.onskreen.com/cornerstone/

[2] http://himmele.googlecode.com/svn/trunk/Google%20Android/Android%20Graphics%20Architecture.pdf

[3] https://github.com/Onskreen/cornerstone

Resource

http://www.customizeblogging.com/2015/04/android-6-0-muffin-concept-video-shows-multi-windows-quick-reply-feats.html

About the Author

Li Liang earned a Master’s degree in signal and information processing from Changchun University of Technology. He joined Intel in 2013 as an application engineer in the Developer Relations Division CCE (Client Computing Enabling). He focuses on helping developers to differentiate their apps on the Android platform.

Android Wear* through ADB


Wearables are one of the latest trends in computing technology. Google’s Android* Wear operating system makes wearables a fertile new area for app development.  

This article gives an overview of the Android Wear operating system focusing on wearable devices, application types, development, and debugging. It also explains two ways of debugging a wearable app using ADB.

Devices

The concept of wearable computers includes different types of devices: wearable headsets, fitness and medical devices, digital jewelry, and even wearables for pets. But nowadays the leading product category is smartwatches. The biggest high tech companies offer their own lineup of wristwatches based on Android Wear. Pebble Steel*, ASUS ZenWatch*, Motorola 360*, LG G Watch R*, Samsung Gear S* are the latest on the market. All of them have different designs, but they share some common functionality supported by Android Wear: Google Now* technology, fitness tracking, controlling music, and voice commands. Also, all smartwatches depend on mobile Android/iOS* devices communicating with them through Bluetooth*. There are special companion apps for smartphones and tablets to connect to wearables.

What to develop?

Although Android Wear is a relatively new project, the Android Wear Center, an analog of Google Play*, provides a wide range of applications specifically designed for wearables.

Android Wear Center

A wide variety of applications are available for smartwatches. Every day the Android Wear Center publishes new releases of personalization, music, communication, health, fitness, and other apps. Despite the small smartwatch screen, arcade and puzzle games, while not yet abundant, are available too.

Android Wear Center Apps

The vast majority of Wear apps are watchfaces, which customize the essential wristwatch function – showing the time.

Android Wear Apps Watch Faces

How to develop?

On one hand, creating apps for Android Wear is similar to developing for tablets and smartphones. You can use familiar development tools like JDK, Android SDK (Android Wear supports using most of the standard Android APIs), Eclipse*, Android Studio, or other IDEs. Here, you can find the list of the Wearable Support Library classes.

On the other hand, Google provides a vision and design principles unique to wearable app development that cover the essential differences between mobile and wearable technologies. The small screen size and special interaction characteristics are differences that your app will have to account for. In addition, you should consider your app's structure, context awareness, UI, style, and watch faces.

How to debug?

Debugging is an inherent part of any development life cycle, and developing Android Wear apps is no exception. This section demonstrates how to debug wearable apps using two devices: an LG G Watch R paired with a Nexus 4*.

Android Wear supports two ways to debug your device: over USB and over Bluetooth.

Regardless of which method you use to connect wearables to your PC, you need to do the following initial steps:

  • Install ADB on your PC.

    Android Debug Bridge (ADB) is a command line tool that provides communication between your PC and Android devices or Android device emulators.

  • Prepare devices for connection.

    You need to enable the USB debugging option not only on your wearable, but on the paired mobile device too. This process is common for all Android devices: go to Settings, tap About, and then tap the build number 7 times to activate Developer Options.

     

    Android Wear Apps USB Debugging

  • Go to Developer options and enable ADB debugging

    Android Wear Apps ADB debugging

Next, if you choose USB debugging you should:

  • Connect the wearable device through the USB cable.

    Android Wear Apps USB Cable

  • Allow Wearable debugging by tapping “ok” on the pop-up window on the paired phone or tablet.

    Allow Wearable Debugging

To verify the ADB connection, run "adb devices" at the command line.

Android Wear Command - adb devices

The Bluetooth case is a little more complicated:

  • Enable Debug over Bluetooth on wearable:

    Android Wear - Debug Over Bluetooth

  • In the Android Wear companion app enable Debug over Bluetooth.

    You can see the status under the option:

    Android Wear - Debug Over Bluetooth Enabled

  • Connect the phone or tablet paired with the wearable to the PC through USB cable and allow USB debugging.

    Android Wearable to PC through USB

  • Tap the following commands:
    adb forward tcp:4444 localabstract:/adb-hub
    adb connect localhost:4444
  • Allow Wearable debugging:

    Android Allow Wearable Debugging

After this, the status will change to:

Android Wear Status - Debugging over bluetooth

When your connection is successful, a list of devices, like the one shown below, displays:

Android Wear - Connection successful

Now all steps are complete, and you can use ADB commands to debug your app.

How to take screenshots?

ADB is useful for other things in addition to debugging. Taking screenshots on wearables is not as trivial as it seems. The “Take wearable screenshot” option in an Android Wear companion app allows only sharing screenshots through mail or social networks. You can use ADB as another way to save images of wearable screens on your PC.

adb shell screencap -p /sdcard/screenshot.png
adb pull /sdcard/screenshot.png

Notice that even on round dials the screenshots are actually square. You should keep this point in mind to improve your apps’ usability.

Android taking screenshots on wearables

Summary

The combination of the modern technologies like Intel Quark processors and the Android Wear operating system opens up new opportunities for application development. As you can see, Android developers experienced in creating apps for the mobile industry can easily shift to creating apps for wearables like smartwatches, taking care of some nuances.

References

About the Author

Anna Belova works as a Software Engineering Intern in the Software & Services Group at Intel Corporation. She is pursuing a bachelor's degree in Business Informatics at the National Research University Higher School of Economics, Faculty of Business Informatics and Applied Mathematics. Anna is interested in mobile technologies and machine learning.
