Channel: Intel Developer Zone Articles

Connecting the Intel® Edison board to your Android* Phone with Serial Port Profile (SPP)


Requirements

  • An Android* phone or tablet running Android 4.3 or higher.

  • An Intel® Edison board connected to a Wi-Fi* network; see Step 3: Get your Board Online.

  • A host computer on the same network with SCP available.

  • A terminal session to your board, via serial port or SSH.

Setup

Using SCP, copy this file over to your board.

http://downloadmirror.intel.com/24698/eng/SPP-loopback.py

Navigate to the location of SPP-loopback.py and run it in the background.

python SPP-loopback.py &

Install the Bluetooth spp pro app on your Android device.

https://play.google.com/store/apps/details?id=mobi.dzs.android.BLE_SPP_PRO

Type the following in your board's terminal.

rfkill unblock bluetooth
bluetoothctl

Turn on the Bluetooth on your Android device and make it Discoverable.

(Settings>Bluetooth)

Type the following in the terminal.

scan on

Find your device and pair with it (replace the MAC address below with the MAC address of your device).

pair 78:24:AF:13:58:B9

Select Pair on your device.

Make your board discoverable.

discoverable on

Trust your device.

trust 78:24:AF:13:58:B9

Open Bluetooth spp pro.

Scan for devices.

Then Connect to your board.

It should look like the following screen. 

Try CMD line mode to send messages to the terminal of your board.

Troubleshooting

If you're getting

Failed to pair: org.bluez.Error.AlreadyExists

then check which devices you are paired with

paired-devices

then remove the device you are paired with (replace the MAC address with the MAC address of your device)

remove 78:24:AF:13:58:B9

-----

For other useful commands inside bluetoothctl, type:

help

 


Intel® INDE 2015 Release Notes and Installation Guide


INDE 2015 Update 1.1 is available now!

INDE 2015 Update 1.1 changes only the license label on top of Update 1. If you have already installed Intel INDE 2015 Update 1, this installation is optional.

If you are an existing user of INDE 2015 Update 1, you will receive a notification in your Intel Software Manager tool on how to update your installation. If you are a new user, please visit  https://software.intel.com/en-us/intel-inde to see various packages available and download INDE.

Intended Audience

Software developers interested in a cross-platform productivity suite that enables them to quickly and easily create native apps from an OS X* host for Android* targets, or from a Windows* host for Android* or Windows* targets

What is new in Update 1 release of INDE 2015?

  • Support for Android* Lollipop 32 bit/64 bit apps
  • Support for Nexus Player
  • Visual Studio 2013 Community Edition is supported

For details, please refer to the Release Notes for Windows* and OS X* hosts below.

Customer Support

For technical support of Intel® Integrated Native Developer Experience 2015 (Intel® INDE) Update 1, including answers to questions not addressed in this product, visit https://software.intel.com/en-us/intel-inde-support for the latest online getting-started help, the technical support forum, FAQs, and other support information.

 To seek help with an issue in Intel® INDE (any edition), go to the user forum (https://software.intel.com/en-us/forums/intel-integrated-native-developer-experience-intel-inde)

To submit an issue for Ultimate or Professional Edition of Intel® INDE, go to Intel® Premier Support: (https://premier.intel.com/)

Intel® Premier Support is not available for Starter Edition of the product.

For more information on registering to Intel Premier Support, go to: http://software.intel.com/en-us/articles/performance-tools-for-software-developers-intel-premier-support

Release Notes and Installation Guide for Windows* host

Release Notes and Installation Guide for OS X* host

Application Development Using NexStreaming NexPlayer* SDK


Download Document

Introduction

NexStreaming is a global mobile software company with headquarters in Seoul, Korea and branches in Spain, the United States, Japan, and China. Its most popular product, the NexPlayer* SDK, is a player SDK that some of the best-known video service providers integrate into their mobile applications. The player is compatible with all the leading DRM technologies in the industry and can be combined with complementary technologies such as advertisement insertion, audience measurement, or audio enhancement. The NexPlayer SDK provides audio and video decoding and playback services, which application developers can use to quickly build efficient custom multimedia players. The SDK is reliable and robust and has proven to be compatible with international standards. This paper describes how to create an x86 player app using the NexPlayer SDK.

Platform Compatibility

The NexPlayer SDK is optimized for x86, so it is fully supported by x86 devices. NexPlayer SDK supports the following:

  • Android* 1.6 or above
  • mp4, 3gp, avi, asf, piff video file formats
  • HTTP Live Streaming version 5.0, 3GPP Progressive Download, AES128, HTTPS protocols, h.264, AAC, AAC+, eAAC+ codecs. Both software and hardware codecs are supported by the SDK.
  • .smi, .srt, .sub, 3GPP timed text, TTML closed captions (PIFF/CFF only), CEA 608 and CEA 708 closed captions, and Web Video Text Tracks (WebVTT)

How to create an x86 player app using NexPlayer SDK

You need to request the SDK from NexStreaming at http://www.nexstreaming.com/downloads-sdk. Once you have downloaded the SDK and the demo app, the optimization for Intel® chipsets is automatically included. Full documentation comes with the SDK; refer to the documentation and samples to select the APIs appropriate for your application. App development using the SDK is straightforward, and you can use the sample code as a guide. It takes about one hour to develop a complete app with the NexPlayer SDK.

To integrate NexPlayer SDK into an x86 Android app and ensure the best experience with the NexPlayer SDK on Intel® devices, you need to perform the following simple steps.

  • Copy the libs contained in the SDK/libs/ folder into the assets/x86 folder of your project.
  • Copy the libs contained in the SDK/libs folder into the libs/x86 folder of your project.
  • Copy the source files contained in the SDK/src folder into the src/com/nexstreaming/nexplayerengine folder of your project.

The NexPlayer SDK will detect that change and use those libraries to take advantage of Intel resources. Once all libraries are in these directories, switching between the ARM and x86 versions of an app is automatic and handled by the SDK. If you want to integrate a newer version of the NexPlayer SDK, you only need to overwrite the library files above.

NexPlayer SDK includes a large number of libraries including DRM libraries. These libraries are in the app/assets/x86 directory. Required libraries consist of engine, decoders, and the rendering layer:

  • libnexplayerengine.so
  • libnexalfactory.so
  • libnexadaptation_layer_for_dlsdk.so
  • libnexralbody_audio.so
  • libnexralbody_video_opengl.so
  • libnexral_nw_ics.so
  • libnexral_nw_jb.so
  • libnexcal_oc_ics.so
  • libnexcal_oc_jb.so
  • libnexcralbody_mc_jb.so
  • libnexcal_in_aac_x86.so
  • libnexcal_in_mp3_x86.so
  • libnexcal_in_amr_x86.so

Some library names contain abbreviations “ics” for Ice Cream Sandwich and “jb” for Jelly Bean. If your app only supports certain version(s) of Android, you can remove the libraries for the unsupported version(s) of the OS.

The libraries that provide support for codecs are:

  • libnexcal_h264_x86.so - video lib for H.264
  • libnexcal_aac_x86.so - audio lib for AAC, AAC-Plus, and HE-AAC
  • libnexcal_mp3_x86.so - audio lib for MP2 and MP3

The following libraries support text captions:

  • libnexcal_3gpp_x86.so - for 3GPP timed text captions
  • libnexcal_closedcaption_x86.so - for CEA 608 and CEA 708 closed captions
  • libnexcal_ttml_x86.so - for TTML (CFF) timed text captions
  • libnexcal_webvtt_x86.so - for WebVTT text tracks

To reduce the size of the app, include only the libraries that you need for your app.

The libraries from app/libs/x86 need to be loaded in initManager() in the application's Java* source file. For example, for “NexHDSample” add the corresponding x86 library in initManager() in app/src/NexHDManager.java:

System.loadLibrary("NexHTTPDownloaderSample_jni");

How to display your x86 app’s video on the screen using the SDK

There are two ways to display a video using the NexPlayer SDK: NexVideoRenderer and the OpenGL* renderer. NexVideoRenderer is the recommended way to display video. It abstracts the complexities of surface handling and video rendering by choosing the most appropriate renderer based on the device and version of the OS. To implement NexVideoRenderer, do the following:

  1. Pass in a context (android.content.Context) to the constructor.
  2. Set up listeners (NexPlayer.IListener and NexPlayer.IVideoRendererListener).
  3. Create an instance of NexPlayer.
  4. Perform the necessary setup for NexPlayer (NexPlayer.setNexALFactory and NexPlayer.init).
  5. Call init with the NexPlayer instance (NexVideoRenderer.init).
  6. Add the NexVideoRenderer instance as a view to your layout.

A complete example of this renderer is in NexPlayerSample/src/com.nexstreaming.nexplayerengine/NexVideoRenderer.java.

Live Streaming

HTTP Live Streaming supports multiple audio and video streams. setMediaStream() API allows these streams to be selected from the user interface while content is being played. This is supported by the SDK and is mentioned in this paper just for your information. Three possible use cases are available:

  1. A variant playlist with alternative audio. In this case video and audio can be selected independently.
  2. A variant playlist with alternative video. Here each track contains both audio and video, but alternative video streams are available (for example, different camera angles or views of the same content).
  3. A combination of a variant playlist with alternative video and audio. This use case is a combination of the above cases, where a main video stream provides video tracks at different bitrates but INCLUDES the same audio, and separate audio tracks are available for optional language selection.

PnP Analysis of x86 NexPlayerDemoApp

In the following analysis I evaluated launch and idle behavior as well as playback of local mp4 files and sample streams of a sample app (“NexPlayerDemoApp”) developed using the NexPlayer SDK. The analysis was performed with the VTune™ analyzer for Android and Intel® SoC Watch on an Intel® Atom™ processor Z3740-based tablet @ 1.6 GHz with Intel® HD Graphics (Gen 7), Wi-Fi* connected, on Android 4.4.2 (KitKat).

Intel® Atom™ processor-based tablet Z3740 Baseline


Table 1.

NexPlayerDemoApp at Idle


Table 2.

High C0 residency on the Intel® Atom™ Z3740-based tablet while the application is idle would result in increased power consumption, but this is not the case with NexPlayerDemoApp. The numbers are very close to the Z3740 baseline.

NexPlayerDemoApp at Launch


Table 3.

The numbers are very close to the Z3740 baseline.

Video Playback

During video playback, the x86 version of NexPlayerDemoApp utilizes an average of 33% of the CPU. The activity does not show abnormalities and is consistent with the core total.


Figure 1. Video Playback Time View for x86 NexPlayerDemoApp.

Live Streaming

During live streaming, the x86 version of NexPlayerDemoApp utilizes an average of 25% of the CPU. The activity does not show abnormalities and is consistent with the core total.


Figure 2. Live Streaming Time View for x86 NexPlayerDemoApp.

Battery Consumption

The x86 version of NexPlayerDemoApp draws ~3.8 W of power during a combined run of live streaming (the majority of the time), video playback, and a very short idle.

Conclusion

Developing apps with the NexPlayer SDK is fast and easy and proves to be efficient with low power utilization on x86 mobile devices.

About the Author

Lana Lindberg is a member of the Intel® Software and Solutions Group (SSG), Developer Relations Division, Intel® Atom™ Processor High Touch Software Enabling team. Before joining SSG, Lana was an OpenGL ES test developer in the graphics software validation team of the Ultra Mobility Group.

References and Helpful Links

  1. NexStreaming NexPlayer Software-Hardware SDK for Android Version 6.28 Technical Reference Manual. NexStreaming Corporation, January 8, 2015.
  2. NexStreaming website www.nexstreaming.com

Using the Intel® SSSE3 Instruction Set to Accelerate DNN Algorithm in Local Speech Recognition


Download PDF [PDF 888.45 KB]

Overview

Over the past thirty years, speech recognition technology has made significant progress, moving from the lab to the market. It is becoming important in our daily lives and can be found at work, at home, and in automotive, medical, and other fields. It is one of the top 10 emerging technologies in the world.

As a result of these years' developments, the main algorithm of speech recognition has changed from GMM (Gaussian Mixture Model) and HMM-GMM (Hidden Markov Model-Gaussian Mixture Model) to DNN (Deep Neural Network). A DNN works in a way similar to the human brain; it is a very complicated model with heavy calculations built on huge amounts of data. Thanks to the internet, we only need a smartphone and never think about the huge number of servers in remote computer rooms that make it all happen. Without the internet, however, the speech recognition service on your mobile device is nearly useless; it can rarely understand what you say.

Is it possible to move the DNN calculation from the server to the mobile device itself, such as phones and tablets? The answer is YES.

With support for the SSSE3 instruction set on Intel CPUs, we can easily run a DNN-based speech recognition application without the internet. In our tests the accuracy is over 80%, very close to the results of online-mode tests. Direct SSSE3 support thus creates a good user experience on mobile devices. In this article I will explain what DNN is and how the Intel® SSSE3 instruction set helps to accelerate the DNN calculation process.

Introduction

DNN is the abbreviation for Deep Neural Network, a feed-forward network with many hidden layers. DNN has been a hot topic in machine learning in recent years, producing a wide range of applications. It has a deep structure with tens of millions of parameters to be learned, so training is very time-consuming.

Speech recognition is a typical application of DNN. To put it simply, a speech recognition application consists of an acoustic model, a language model, and a decoding process. The acoustic model is used to simulate the probability distribution of pronunciation. The language model is used to simulate the relationship between words. The decoding stage then uses these two models to translate sound into text. A neural network can simulate any word distribution, and a deep neural network has stronger expressive ability than a shallow one: it simulates the deep structure of the brain and can "understand" the characteristics of things more accurately. Compared with other methods, a deep neural network can therefore simulate acoustic and language models more accurately.

Figure 1. DNN Application Field

Typical DNN Chart

A typical DNN generally contains multiple alternating linear and non-linear layers, as shown below:

Figure 2. Including 4 hidden layer DNN acoustic model

In Figure 2, the linear layer is a fully connected relationship, and the input to output could be described by this formula:

Y^T = X^T W^T + B

X^T is a row vector, the input of the neural network. In speech recognition applications we generally put 4 frames of data together for calculation, creating a 4×M input matrix. W^T and B are the linear transformation matrix and the offset vector of the neural network; W^T is usually huge and square.
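As a minimal sketch (with illustrative names and dimensions; the actual DNN implementation is not shown in this paper), the linear layer above can be written in plain C as:

```c
#include <assert.h>

/* Naive implementation of the linear layer Y^T = X^T * W^T + B.
   x: n x m input matrix (n frames, m features), row-major
   w: m x k weight matrix, row-major
   b: k-element offset (bias) vector
   y: n x k output matrix, row-major */
void linear_layer(int n, int m, int k,
                  const float *x, const float *w,
                  const float *b, float *y)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < k; j++) {
            float acc = b[j];
            for (int t = 0; t < m; t++)
                acc += x[i * m + t] * w[t * k + j];
            y[i * k + j] = acc;
        }
}
```

It is exactly these dense multiply-accumulate loops that the SSSE3 instructions described below can accelerate.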

Intel® SSSE3 Instruction Set

Supplemental Streaming SIMD Extensions 3, or SSSE3 for short, is Intel's extension of the SSE3 instruction set. SSSE3 is part of Intel's SIMD technology, which is integrated into Intel CPUs and improves their abilities in multimedia processing, encoding/decoding, and general computation. Using the SSSE3 instruction set, we can process multiple data items with a single instruction in one clock cycle, greatly improving a program's efficiency. It is particularly effective for matrix calculations.

To use the SSSE3 instruction set, we first include the SIMD header files:

#include <mmintrin.h>  //MMX
#include <xmmintrin.h> //SSE (includes mmintrin.h)
#include <emmintrin.h> //SSE2 (includes xmmintrin.h)
#include <pmmintrin.h> //SSE3 (includes emmintrin.h)
#include <tmmintrin.h> //SSSE3 (includes pmmintrin.h)
#include <smmintrin.h> //SSE4.1 (includes tmmintrin.h)
#include <nmmintrin.h> //SSE4.2 (includes smmintrin.h)
#include <wmmintrin.h> //AES (includes nmmintrin.h)
#include <immintrin.h> //AVX (includes wmmintrin.h)

The header file “tmmintrin.h” is for SSSE3, and the functions defined in this file are below:


/*Add horizontally packed [saturated] words, double words,
{X,}MM2/m{128,64} (b) to {X,}MM1 (a).*/
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=a0+a1,r1=a2+a3,r2=a4+a5,r3=a6+a7,r4=b0+b1,r5=b2+b3,r6=b4+b5, r7=b6+b7
extern __m128i _mm_hadd_epi16 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=a0+a1,r1=a2+a3,r2=b0+b1,r3=b2+b3
extern __m128i _mm_hadd_epi32 (__m128i a, __m128i b);
//SATURATE_16(x) is ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=SATURATE_16(a0+a1), ..., r3=SATURATE_16(a6+a7),
//r4=SATURATE_16(b0+b1), ..., r7=SATURATE_16(b6+b7)
extern __m128i _mm_hadds_epi16 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=a0+a1, r1=a2+a3, r2=b0+b1, r3=b2+b3
extern __m64 _mm_hadd_pi16 (__m64 a, __m64 b);
//a=(a0, a1), b=(b0, b1), then r0=a0+a1, r1=b0+b1
extern __m64 _mm_hadd_pi32 (__m64 a, __m64 b);
//SATURATE_16(x) is ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=SATURATE_16(a0+a1), r1=SATURATE_16(a2+a3),
//r2=SATURATE_16(b0+b1), r3=SATURATE_16(b2+b3)
extern __m64 _mm_hadds_pi16 (__m64 a, __m64 b);

/*Subtract horizontally packed [saturated] words, double words,
{X,}MM2/m{128,64} (b) from {X,}MM1 (a).*/
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=a0-a1, r1=a2-a3, r2=a4-a5, r3=a6-a7, r4=b0-b1, r5=b2-b3, r6=b4-b5, r7=b6-b7
extern __m128i _mm_hsub_epi16 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=a0-a1, r1=a2-a3, r2=b0-b1, r3=b2-b3
extern __m128i _mm_hsub_epi32 (__m128i a, __m128i b);
//SATURATE_16(x) is ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=SATURATE_16(a0-a1), ..., r3=SATURATE_16(a6-a7),
//r4=SATURATE_16(b0-b1), ..., r7=SATURATE_16(b6-b7)
extern __m128i _mm_hsubs_epi16 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=a0-a1, r1=a2-a3, r2=b0-b1, r3=b2-b3
extern __m64 _mm_hsub_pi16 (__m64 a, __m64 b);
//a=(a0, a1), b=(b0, b1), then r0=a0-a1, r1=b0-b1
extern __m64 _mm_hsub_pi32 (__m64 a, __m64 b);
//SATURATE_16(x) is ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=SATURATE_16(a0-a1), r1=SATURATE_16(a2-a3),
//r2=SATURATE_16(b0-b1), r3=SATURATE_16(b2-b3)
extern __m64 _mm_hsubs_pi16 (__m64 a, __m64 b);

/*Multiply and add packed words,
{X,}MM2/m{128,64} (b) to {X,}MM1 (a).*/
//SATURATE_16(x) is ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
//a=(a0, a1, a2, ..., a13, a14, a15), b=(b0, b1, b2, ..., b13, b14, b15)
//then r0=SATURATE_16((a0*b0)+(a1*b1)), ..., r7=SATURATE_16((a14*b14)+(a15*b15))
//Parameter a contains unsigned bytes. Parameter b contains signed bytes.
extern __m128i _mm_maddubs_epi16 (__m128i a, __m128i b);
//SATURATE_16(x) is ((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x))
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=SATURATE_16((a0*b0)+(a1*b1)), ..., r3=SATURATE_16((a6*b6)+(a7*b7))
//Parameter a contains unsigned bytes. Parameter b contains signed bytes.
extern __m64 _mm_maddubs_pi16 (__m64 a, __m64 b);

/*Packed multiply high integers with round and scaling,
{X,}MM2/m{128,64} (b) to {X,}MM1 (a).*/
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=INT16(((a0*b0)+0x4000) >> 15), ..., r7=INT16(((a7*b7)+0x4000) >> 15)
extern __m128i _mm_mulhrs_epi16 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=INT16(((a0*b0)+0x4000) >> 15), ..., r3=INT16(((a3*b3)+0x4000) >> 15)
extern __m64 _mm_mulhrs_pi16 (__m64 a, __m64 b);

/*Packed shuffle bytes
{X,}MM2/m{128,64} (b) by {X,}MM1 (a).*/
//SELECT(a, n) extracts the nth 8-bit parameter from a. The 0th 8-bit parameter
//is the least significant 8-bits, b=(b0, b1, b2, ..., b13, b14, b15), b is mask
//then r0 = (b0 & 0x80) ? 0 : SELECT(a, b0 & 0x0f), ...,
//r15 = (b15 & 0x80) ? 0 : SELECT(a, b15 & 0x0f)
extern __m128i _mm_shuffle_epi8 (__m128i a, __m128i b);
//SELECT(a, n) extracts the nth 8-bit parameter from a. The 0th 8-bit parameter
//is the least significant 8-bits, b=(b0, b1, ..., b7), b is mask
//then r0= (b0 & 0x80) ? 0 : SELECT(a, b0 & 0x07),...,
//r7=(b7 & 0x80) ? 0 : SELECT(a, b7 & 0x07)
extern __m64 _mm_shuffle_pi8 (__m64 a, __m64 b);

/*Packed byte, word, double word sign, {X,}MM2/m{128,64} (b) to {X,}MM1 (a).*/
//a=(a0, a1, a2, ..., a13, a14, a15), b=(b0, b1, b2, ..., b13, b14, b15)
//then r0=(b0 < 0) ? -a0 : ((b0 == 0) ? 0 : a0), ...,
//r15= (b15 < 0) ? -a15 : ((b15 == 0) ? 0 : a15)
extern __m128i _mm_sign_epi8 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//r0=(b0 < 0) ? -a0 : ((b0 == 0) ? 0 : a0), ...,
//r7= (b7 < 0) ? -a7 : ((b7 == 0) ? 0 : a7)
extern __m128i _mm_sign_epi16 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=(b0 < 0) ? -a0 : ((b0 == 0) ? 0 : a0), ...,
//r3= (b3 < 0) ? -a3 : ((b3 == 0) ? 0 : a3)
extern __m128i _mm_sign_epi32 (__m128i a, __m128i b);
//a=(a0, a1, a2, a3, a4, a5, a6, a7), b=(b0, b1, b2, b3, b4, b5, b6, b7)
//then r0=(b0 < 0) ? -a0 : ((b0 == 0) ? 0 : a0), ...,
//r7= (b7 < 0) ? -a7 : ((b7 == 0) ? 0 : a7)
extern __m64 _mm_sign_pi8 (__m64 a, __m64 b);
//a=(a0, a1, a2, a3), b=(b0, b1, b2, b3)
//then r0=(b0 < 0) ? -a0 : ((b0 == 0) ? 0 : a0), ...,
//r3= (b3 < 0) ? -a3 : ((b3 == 0) ? 0 : a3)
extern __m64 _mm_sign_pi16 (__m64 a, __m64 b);
//a=(a0, a1), b=(b0, b1), then r0=(b0 < 0) ? -a0 : ((b0 == 0) ? 0 : a0),
//r1= (b1 < 0) ? -a1 : ((b1 == 0) ? 0 : a1)
extern __m64 _mm_sign_pi32 (__m64 a, __m64 b);

/*Packed align and shift right by n*8 bits,
{X,}MM2/m{128,64} (b) to {X,}MM1 (a).*/
//n: A constant that specifies how many bytes the interim result will be
//shifted to the right. If n > 32, the result value is zero.
//CONCAT(a, b) is the 256-bit unsigned intermediate value that is a
//concatenation of parameters a and b.
//The result is this intermediate value shifted right by n bytes.
//then r= (CONCAT(a, b) >> (n * 8)) & 0xffffffffffffffff
extern __m128i _mm_alignr_epi8 (__m128i a, __m128i b, int n);
//n: An integer constant that specifies how many bytes to shift the interim
//result to the right. If n > 16, the result value is zero.
//CONCAT(a, b) is the 128-bit unsigned intermediate value that is formed by
//concatenating parameters a and b.
//The result value is the rightmost 64 bits after shifting this intermediate
//result right by n bytes.
//then r = (CONCAT(a, b) >> (n * 8)) & 0xffffffff
extern __m64 _mm_alignr_pi8 (__m64 a, __m64 b, int n);

/*Packed byte, word, double word absolute value,
{X,}MM2/m{128,64} (b) to {X,}MM1 (a).*/
//a=(a0, a1, a2, ..., a13, a14, a15)
//then r0 = (a0 < 0) ? -a0 : a0, ..., r15 = (a15 < 0) ? -a15 : a15
extern __m128i _mm_abs_epi8 (__m128i a);
//a=(a0, a1, a2, a3, a4, a5, a6, a7)
//then r0 = (a0 < 0) ? -a0 : a0, ..., r7 = (a7 < 0) ? -a7 : a7
extern __m128i _mm_abs_epi16 (__m128i a);
//a=(a0, a1, a2, a3)
//then r0 = (a0 < 0) ? -a0 : a0, ..., r3 = (a3 < 0) ? -a3 : a3
extern __m128i _mm_abs_epi32 (__m128i a);
//a=(a0, a1, a2, a3, a4, a5, a6, a7)
//then r0 = (a0 < 0) ? -a0 : a0, ..., r7 = (a7 < 0) ? -a7 : a7
extern __m64 _mm_abs_pi8 (__m64 a);
//a=(a0, a1, a2, a3)
//then r0 = (a0 < 0) ? -a0 : a0, ..., r3 = (a3 < 0) ? -a3 : a3
extern __m64 _mm_abs_pi16 (__m64 a);
//a=(a0, a1), then r0 = (a0 < 0) ? -a0 : a0, r1 = (a1 < 0) ? -a1 : a1
extern __m64 _mm_abs_pi32 (__m64 a);

The data structure definitions of __m64 and __m128 are in the MMX header file “mmintrin.h” and the SSE header file “xmmintrin.h”.

__m64:


typedef union __declspec(intrin_type) _CRT_ALIGN(8) __m64
{
	unsigned __int64    m64_u64;
	float               m64_f32[2];
	__int8              m64_i8[8];
	__int16             m64_i16[4];
	__int32             m64_i32[2];
	__int64             m64_i64;
	unsigned __int8     m64_u8[8];
	unsigned __int16    m64_u16[4];
	unsigned __int32    m64_u32[2];
} __m64;

__m128:


typedef union __declspec(intrin_type) _CRT_ALIGN(16) __m128 {
	float               m128_f32[4];
	unsigned __int64    m128_u64[2];
	__int8              m128_i8[16];
	__int16             m128_i16[8];
	__int32             m128_i32[4];
	__int64             m128_i64[2];
	unsigned __int8     m128_u8[16];
	unsigned __int16    m128_u16[8];
	unsigned __int32    m128_u32[4];
} __m128;


Case study: using SSSE3 functions to accelerate DNN calculation

In this section, we take two functions as a sample to describe how SSSE3 is used to accelerate the DNN calculation process.

__m128i _mm_maddubs_epi16 (__m128i a, __m128i b) Saturated Accumulation Operation

This function is critical for the matrix calculations in DNN. Parameter a is a 128-bit register holding 16 unsigned 8-bit integers; parameter b holds 16 signed 8-bit integers; the returned result contains 8 signed 16-bit integers. The function perfectly meets the requirements of matrix calculation:


	r0 := SATURATE_16((a0*b0) + (a1*b1))
	r1 := SATURATE_16((a2*b2) + (a3*b3))
	…
	r7 := SATURATE_16((a14*b14) + (a15*b15))
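A scalar reference model of this behavior (for illustration only; it mimics what the intrinsic computes element by element, it is not the intrinsic itself) is:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of _mm_maddubs_epi16: a holds 16 unsigned 8-bit values,
   b holds 16 signed 8-bit values; each result element is the saturated
   16-bit sum of two adjacent products. */
static int16_t saturate16(int32_t x)
{
    return (int16_t)((x > 32767) ? 32767 : ((x < -32768) ? -32768 : x));
}

void maddubs_model(const uint8_t a[16], const int8_t b[16], int16_t r[8])
{
    for (int i = 0; i < 8; i++)
        r[i] = saturate16((int32_t)a[2 * i] * b[2 * i]
                        + (int32_t)a[2 * i + 1] * b[2 * i + 1]);
}
```

Note how saturation matters: two maximal products (255 * 127 twice) would overflow a 16-bit result, so the intrinsic clamps the sum to 32767.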

__m128i _mm_hadd_epi32 (__m128i a, __m128i b) Adjacent Elements Add Operation

This function performs a pair-wise (horizontal) add. Parameters a and b are both 128-bit registers, each storing 4 signed 32-bit integers. Unlike the normal element-wise addition of two vectors, it adds adjacent elements within each input vector:


	r0 := a0 + a1
	r1 := a2 + a3
	r2 := b0 + b1
	r3 := b2 + b3
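Again as an illustrative scalar model (a sketch of the intrinsic's semantics, not the intrinsic itself):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of _mm_hadd_epi32: sums adjacent pairs within each
   4-element input instead of adding the two inputs element-wise. */
void hadd_epi32_model(const int32_t a[4], const int32_t b[4], int32_t r[4])
{
    r[0] = a[0] + a[1];
    r[1] = a[2] + a[3];
    r[2] = b[0] + b[1];
    r[3] = b[2] + b[3];
}
```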

Now, suppose there is a vector calculation task from the DNN process:

Q: There are five vectors a1, b1, b2, b3, b4. Vector a1 is a 16-dimensional unsigned-char vector; b1, b2, b3, b4 are 16-dimensional signed-char vectors. We need the inner products a1*b1, a1*b2, a1*b3, a1*b4, and we store the results in signed 32-bit integers.

If we use a normal design and the C language to implement it, the code would be as follows:


unsigned char a1[16];
signed char b1[16],b2[16],b3[16],b4[16];
int c[4],i;
// Initialize b1,b2,b3,b4 and a1; initialize c with zeros
for(i=0;i<16;i++){
    c[0] += (short)a1[i]*(short)b1[i];
    c[1] += (short)a1[i]*(short)b2[i];
    c[2] += (short)a1[i]*(short)b3[i];
    c[3] += (short)a1[i]*(short)b4[i];
}

Assuming one multiplication and one addition per clock cycle, this code takes 64 clock cycles.

Using the SSSE3 instruction set instead:


register __m128i a1,b1,b2,b3,b4,c,d1,d2,d3,d4;
// initialize a1 b1 b2 b3 b4 c here, where c is set to zeros//
d1 = _mm_maddubs_epi16(a1,b1);
d1 = _mm_add_epi32(_mm_srai_epi32(_mm_unpacklo_epi16(d1, d1), 16), _mm_srai_epi32(_mm_unpackhi_epi16(d1, d1), 16));
d2 = _mm_maddubs_epi16(a1,b2);
d2 = _mm_add_epi32(_mm_srai_epi32(_mm_unpacklo_epi16(d2, d2), 16), _mm_srai_epi32(_mm_unpackhi_epi16(d2, d2), 16));
d3 = _mm_hadd_epi32(d1, d2);
d1 = _mm_maddubs_epi16(a1,b3);
d1 = _mm_add_epi32(_mm_srai_epi32(_mm_unpacklo_epi16(d1, d1), 16), _mm_srai_epi32(_mm_unpackhi_epi16(d1, d1), 16));
d2 = _mm_maddubs_epi16(a1,b4);
d2 = _mm_add_epi32(_mm_srai_epi32(_mm_unpacklo_epi16(d2, d2), 16), _mm_srai_epi32(_mm_unpackhi_epi16(d2, d2), 16));
d4 = _mm_hadd_epi32(d1, d2);
c = _mm_hadd_epi32(d3, d4);

The result is stored in the 128-bit register c as 4 packed 32-bit integers. Taking the pipeline into consideration, this process may cost 12 or 13 clock cycles. So the potential gain from this task is:

Implementation              | CPU Clock Cycles | Promotion
Normal C coding             | 64               | -
Using SSSE3 instruction set | 13               | ~500%

As we know, there are many matrix calculations in the DNN process for speech recognition. If we optimize each one in our code like this, we can achieve better performance on the IA platform than ever. We have cooperated with the ISV Unisound, which provides speech recognition services in China; with this approach, Unisound ran its DNN process with a performance improvement of over 10% compared to ARM devices.

Summary

DNN is becoming the main algorithm in speech recognition. It has been adopted by Google Now, Baidu Voice, Tencent WeChat, iFlytek Speech Service, Unisound Speech Service, and many others. At the same time, the SSSE3 instruction set can help to optimize the speech recognition process. If all of these applications begin using it, I believe speech services will give us a better experience and increase the usage of the IA platform.

About the Author

Li Alven graduated from Huazhong University of Science and Technology, where he majored in Computer Science and Information Security, in 2007. He joined Intel in 2013 as a senior application engineer in the Developer Relations Division Mobile Enabling Team. He is focused on differentiation and innovative enabling on the IA platform, speech recognition technology, performance tuning, etc.

Using Gesture Recognition as Differentiation Feature on Android*


Download Document

Overview

Sensors found in mobile devices typically include the accelerometer, gyroscope, magnetometer, pressure sensor, and ambient light sensor. Users generate motion events when they move, shake, or tilt the device. We can use a sensor's raw data to implement motion recognition; for example, you can mute your phone by flipping it when a call comes in, or launch the camera application when you lift the device. Using sensors to create convenient features helps promote a better user experience.

Intel® Context Sensing SDK for Android* v1.6.7 has released several new context types, such as device position, ear touch, flick gesture, and glyph gesture. In this paper, we introduce how to extract useful information from sensor data, and then use an Intel Context Sensing SDK example to demonstrate flick, shake, and glyph detection.

Introduction

A common question is how to connect sensors to the application processor (AP) from the hardware layer. Figure 1 shows three ways for sensors to be connected to the AP: direct attach, discrete sensor hub, and ISH (integrated sensor hub).


Figure 1. Comparing different sensor solutions

When sensors are connected directly to the AP, it is called direct attach. The problem, however, is that direct attach consumes AP power to detect data changes. The next evolution is a discrete sensor hub, which overcomes the power consumption problem and lets the sensors work in an always-on mode. Even if the AP enters the S3[1] state, a sensor hub can use an interrupt signal to wake it up. The next evolution is an integrated sensor hub: the AP contains the sensor hub, which holds down the cost of the whole device BOM.

A sensor hub is an MCU (microcontroller unit): you can implement your algorithm in C/C++, compile it, and download the binary to the MCU. In 2015, Intel will release the CherryTrail-T platform for tablets and the SkyLake platform for 2-in-1 devices, both employing sensor hubs. See [2] for more information about the use of integrated sensor hubs.

Figure 2 illustrates the sensor coordinate system: the accelerometer measures acceleration along the x, y, and z axes, and the gyroscope measures rotation around the x, y, and z axes.


Figure 2. Accelerometer and gyroscope sensor coordinate system


Figure 3. Acceleration values on each axis for different positions[3]

Table 1 shows new gestures included in the Android Lollipop release.

Table 1: Android* Lollipop’s new gestures

  • SENSOR_STRING_TYPE_PICK_UP_GESTURE: Triggers when the device is picked up, regardless of where it was before (desk, pocket, bag).
  • SENSOR_STRING_TYPE_GLANCE_GESTURE: Briefly turns on the screen to let the user glance at on-screen content, based on a specific motion.
  • SENSOR_STRING_TYPE_WAKE_GESTURE: Wakes up the device based on a device-specific motion.
  • SENSOR_STRING_TYPE_TILT_DETECTOR: Generates an event each time a tilt is detected.

These gestures are defined in the Android Lollipop source code directory /hardware/libhardware/include/hardware/sensor.h.

Gesture recognition process

The gesture recognition process contains preprocessing, feature extraction, and a template matching stage. Figure 4 shows the process.


Figure 4. Gesture recognition process

In the following sections, we analyze each stage of this process.

Preprocessing

After getting the raw data, data preprocessing is started. Figure 5 shows a gyroscope data graph when a device is right flicked once. Figure 6 shows an accelerometer data graph when a device is right flicked once.


Figure 5. Gyroscope sensor data change graph (RIGHT FLICK ONCE)


Figure 6. Accelerometer sensor data change graph (RIGHT FLICK ONCE)

We can write an Android app that sends sensor data over the network interface, and a Python* script that runs on a PC, so we can dynamically plot the sensor graphs from the device.

This step contains the following items:

  • A PC running a Python script to receive the sensor data.
  • An application on the DUT (device under test) to collect sensor data and send it over the network.
  • An Android adb command to forward the receive and send ports (adb forward tcp:<port> tcp:<port>).
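The PC-side receiver described above can be sketched in Python. The port, the line format (a `timestamp,ax,ay,az` CSV line per sample), and the function names are assumptions for illustration, not the actual tooling:

```python
import socket

def parse_sensor_line(line):
    """Parse one 'timestamp,ax,ay,az' CSV line sent by the device app."""
    fields = line.strip().split(",")
    timestamp = int(fields[0])
    ax, ay, az = (float(v) for v in fields[1:4])
    return timestamp, (ax, ay, az)

def receive_samples(host="127.0.0.1", port=5000):
    """Accept one connection on the adb-forwarded port and yield parsed samples."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile() as stream:
            for line in stream:
                yield parse_sensor_line(line)
```

The yielded tuples can then be fed to a plotting library to draw graphs like Figures 5 and 6.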


Figure 7. How to dynamically show sensor data graphs

In this stage we remove singular points and, as is common, use a filter to cancel noise. The graph in Figure 8 shows the device being turned 90 degrees and then turned back to its initial position.


Figure 8. Remove drift and noise singularity[4]
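The singularity removal and noise filtering above can be sketched with two simple passes: spike clipping for singular points and an exponential low-pass filter for noise. The thresholds and function names are illustrative assumptions:

```python
def clip_spikes(samples, limit):
    """Drop singular points whose jump from the previous sample exceeds limit."""
    out = [samples[0]]
    for x in samples[1:]:
        out.append(x if abs(x - out[-1]) <= limit else out[-1])
    return out

def low_pass(samples, alpha=0.2):
    """Exponential smoothing: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out = []
    y = samples[0]
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out
```

In practice the two are chained, e.g. `low_pass(clip_spikes(axis_data, limit=5.0))`, per axis.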

Feature extraction

Sensor signals contain noise that can affect recognition results; metrics such as FAR (false acceptance rate) and FRR (false rejection rate) quantify recognition errors. By fusing data from different sensors we can get more accurate recognition results. Sensor fusion[5] has been applied in many mobile devices; Figure 9 shows an example of using the accelerometer, magnetometer, and gyroscope to derive device orientation. Feature extraction commonly uses FFT and zero-crossing methods to obtain feature values. Because the accelerometer and magnetometer are easily disturbed by EMI, we usually need to calibrate these sensors.


Figure 9. Get device orientations using sensor fusion [4]
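The orientation fusion in Figure 9 can be illustrated, greatly simplified to a single pitch angle, with a complementary filter: trust the gyro short-term and the accelerometer long-term. The coefficient and names are assumptions for illustration:

```python
def fuse_orientation(pitch_acc, gyro_rate, dt, pitch_prev, alpha=0.98):
    """Complementary filter for one angle.

    pitch_acc: pitch estimated from the accelerometer (drift-free but noisy)
    gyro_rate: angular rate from the gyroscope (smooth but drifts)
    """
    # Integrate the gyro, then pull the estimate toward the accelerometer.
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * pitch_acc
```

Called once per sample, this suppresses both gyro drift and accelerometer noise.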

Features include the max/min values and the peaks and valleys; we extract these data and pass them to the next stage.
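A minimal feature-extraction sketch covering the max/min, peak/valley, and zero-crossing features mentioned above (the threshold and names are illustrative assumptions):

```python
def zero_crossings(xs):
    """Count sign changes, a cheap frequency-content feature."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)

def peaks_and_valleys(xs, threshold=0.0):
    """Return indices of local maxima above threshold and minima below -threshold."""
    peaks, valleys = [], []
    for i in range(1, len(xs) - 1):
        if xs[i] > xs[i - 1] and xs[i] > xs[i + 1] and xs[i] > threshold:
            peaks.append(i)
        elif xs[i] < xs[i - 1] and xs[i] < xs[i + 1] and xs[i] < -threshold:
            valleys.append(i)
    return peaks, valleys

def extract_features(xs):
    """Bundle the features used by the matching stage."""
    peaks, valleys = peaks_and_valleys(xs)
    return {"max": max(xs), "min": min(xs),
            "n_peaks": len(peaks), "n_valleys": len(valleys),
            "zero_crossings": zero_crossings(xs)}
```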

Template Matching

By simply analyzing the graph of the accelerometer sensor, we find that:

  • A typical left flick gesture contains two valleys and one peak
  • A typical left flick twice gesture contains three valleys and two peaks

This implies that we can design a very simple state machine-based flick gesture recognizer. Compared to HMM[6]-based gesture recognition, it is more robust and more precise.


Figure 10. Left flick once/twice accelerometer and gyroscope graph
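The valley/peak counts described above can be sketched as a small state machine. This is an illustrative sketch of the idea, not the SDK's actual algorithm; the event encoding ('V' for valley, 'P' for peak) is an assumption:

```python
def classify_left_flick(events):
    """Walk a small state machine over ordered 'V' (valley) / 'P' (peak) events."""
    valleys = peaks = 0
    expect = "V"                      # a left flick starts with a valley
    for e in events:
        if e != expect:
            return "unknown"          # out-of-order event: reject
        if e == "V":
            valleys += 1
            expect = "P"
        else:
            peaks += 1
            expect = "V"
    if valleys == 2 and peaks == 1:
        return "left_flick_once"      # two valleys + one peak
    if valleys == 3 and peaks == 2:
        return "left_flick_twice"     # three valleys + two peaks
    return "unknown"
```

The events would come from the peak/valley detection of the feature-extraction stage.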

Case study: Intel® Context Sensing SDK

Intel Context Sensing SDK[7] uses sensor providers to transfer sensor data to the context sensing service. Figure 11 shows detailed architecture information.


Figure 11. Intel® Context Sensing SDK and Legacy Android* architecture

Currently the SDK supports glyph, flick, and ear-touch gesture recognition. You can get more information from the latest release notes[8]. Refer to the documentation to learn how to develop applications. The following figures show a device running the ContextSensingApiFlowSample application.


Figure 12. Intel® Context SDK support for the flick gesture [7]

The flick directions the Intel® Context Sensing SDK supports lie along the accelerometer's x and y axes; z-axis flicks are not supported.


Figure 13. Intel® Context SDK support for an ear-touch gesture [7]


Figure 14. Intel® Context SDK support for the glyph gesture [7]


Figure 15. Intel® Context SDK sample application (ContextSensingApiFlowSample)

Summary

Sensors are widely applied in modern computing devices, and motion recognition in mobile devices is a significant differentiation feature for attracting users and improving the user experience. The currently released Intel Context Sensing SDK v1.6.7 makes this kind of sensor usage simple.

About the Author

Li Liang earned a Master’s degree in signal and information processing from Changchun University of Technology. He joined Intel in 2013 as an application engineer working on client computing enabling. He focuses on differentiation enabling on the Android platform, for example, multi-windows, etc.

Reference

[1] http://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface

[2] http://ishfdk.iil.intel.com/download

[3] http://cache.freescale.com/files/sensors/doc/app_note/AN4317.pdf

[4] http://www.codeproject.com/Articles/729759/Android-Sensor-Fusion-Tutorial

[5] http://en.wikipedia.org/wiki/Sensor_fusion

[6] http://en.wikipedia.org/wiki/Hidden_Markov_model

[7] https://software.intel.com/en-us/context-sensing-sdk

[8] https://software.intel.com/sites/default/files/managed/01/37/Context-Sensing-SDK_ReleaseNotes_v1.6.7.pdf

Useful links

https://source.android.com/devices/sensors/sensor-stack.html

https://graphics.ethz.ch/teaching/former/scivis_07/Notes/Slides/07-featureExtraction.pdf

 

Making Your Android* Application Login Ready Part II


Introduction

In part I we explored adding a login screen to our restaurant application and then customizing the rest of the application based on the access level of who logs in. That way, when the manager logs in, they can do their managerial tasks like editing the menu and analyzing restaurant sales, while the customer can see their coupons and reward points. Part I can be read here:

Making Your Android* Application Login Ready Part I

Now in part II we will cover sending and receiving calls to and from a server to handle the user login logic. That way the user will be able to log into any tablet in the restaurant or at any other chain location. The users are stored in a MongoDB* database on the server side, which can be accessed through its RESTful endpoints using the Spring* IO library. To learn more about the server component and how to set it up, see:

Accessing a REST Based Database Backend From an Android* App

Adding app-to-server communication adds another layer of complexity to our application. We need to add error handling for when the tablet has no internet connection and for when the server is offline in addition to HTTP errors.

Verify Internet Connection

Before the customer logs in and tries to connect to the server, we want to verify that the device is connected. There is no point in trying to talk to a server when the device itself isn't even on the network. So before launching the login screen, we check that the device has either a Wi-Fi or a cell connection.

public Boolean wifiConnected(){
    ConnectivityManager connManager = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
    NetworkInfo mWifi = connManager.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
    NetworkInfo mCellular = connManager.getNetworkInfo(ConnectivityManager.TYPE_MOBILE);
    return (mWifi.isConnected() && mWifi.isAvailable()) || (mCellular.isConnected() && mCellular.isAvailable());
}

public void wifiNotConnected(){
	Intent intent = new Intent(LoginViewActivity.this, OrderViewDialogue.class);
	intent.putExtra(DIALOGUE_MESSAGE, getString(R.string.wifi_error));
	startActivity(intent);
	mUserFactory.logoutUser();
	mSignInButton.setEnabled(true);
	mRegisterButton.setEnabled(true);
	MainActivity.setCurrentUser(null);
	mSignInProgress= STATE_DEFAULT;
	LoginViewActivity.this.finish();
}

Code Example 1: Check data connection

We do this in the onResume() method to ensure it is always checked before the login activity starts. If Wi-Fi or cellular data is connected, we can launch the intent specific to the access level of the user who is logged in.

Figure 1: Screenshot of the restaurant application’s manager portal

Async Task

To make the calls to the server, we don't want to interfere with the rest of the application by tying up the main UI thread. Instead we use an AsyncTask to make the call asynchronously in the background. This will be used for the login call (HTTP GET), the register call (HTTP POST), the update call (HTTP PUT), and the delete call (HTTP DELETE).

To demonstrate how to use an AsyncTask, the following is how to set up the calls for the user login for the HTTP GET. When the user clicks login, we first retrieve the inputs and set up the AsyncTask as seen below. 

final String email=mUser.getText().toString();
                final String password=mPassword.getText().toString();
                new AsyncTask<String, Void, String>() {
		//… Async Methods
}.execute();

Code Example 2: Overview of AsyncTask for login call**

The AsyncTask methods we need are onPreExecute, doInBackground, onPostExecute, and onCancelled. In the first method, we give the user feedback that the application is signing in by setting the status message, and we disable the buttons to block subsequent login attempts. We also set up a Handler to cancel the task should the server take too long to respond; cancellation triggers the onCancelled method.

@Override
                    protected void onPreExecute() {
                        //set the state
                        mStatus.setText(R.string.status_signing_in);
                        mSignInProgress= STATE_IN_PROGRESS;
                        //disable subsequent log-in attempts
                        mSignInButton.setEnabled(false);
                        mRegisterButton.setEnabled(false);
                        //cancel the task if takes too long
                        final Handler handler = new Handler();
                        final Runnable r = new Runnable()
                        {
                            public void run()
                            {
                                cancel(true);
                            }
                        };
                        handler.postDelayed(r, 15000);
                    }

Code Example 3: AsyncTask onPreExecute() method **

The doInBackground method is self-explanatory: this is where our method that communicates with the server is called, all on a thread separate from the main UI thread. The user is free to continue exploring, and the app won't appear frozen.

@Override
                    protected String doInBackground(String... params) {
                        String results="";
                        try {
                            mUserFactory.loginUserRestServer(email, password);
                        } catch (Exception e) {
                            results= e.getMessage();
                        }
                        return results;
                    }

Code Example 4: AsyncTask doInBackground() method**

Once the call to the server is complete and we get a response back, we move on to the onPostExecute method. Here we handle displaying any errors to the user or inform them that they are now logged in. Note that setting the user variables is done in the loginUserRestServer method that we called in doInBackground(); that method is explained later in this article.

@Override
                    protected void onPostExecute(String result) {
                        mSignInProgress= STATE_DEFAULT;
                        if((result!=null) && result.equals("")){
                            Intent intent = new Intent(LoginViewActivity.this, OrderViewDialogue.class);
                            intent.putExtra(DIALOGUE_MESSAGE, String.format(getString(R.string.signed_in_as),MainActivity.getCurrentUser().firstName));
                            startActivity(intent);
                        }else{
                            mStatus.setText(String.format(getString(R.string.status_sign_in_error),result));
                            mSignInButton.setEnabled(true);
                            mRegisterButton.setEnabled(true);
                        }
                    }

Code Example 5: AsyncTask onPostExecute() method**

Finally, in the onCancelled method, we inform the user that there was an error and enable the buttons again so the user can retry.

@Override
                    protected void onCancelled(){
                        mStatus.setText("Error communicating with the server.");
                        mSignInButton.setEnabled(true);
                        mRegisterButton.setEnabled(true);
                    }

Code Example 6: AsyncTask onCancelled() method **

Server Calls

For the GET call to our Spring IO server, we search for the user's login credentials in the database using a findByEmailAndPassword query method defined on the server side. It returns a JSON response, which is parsed into a local user variable. Our handler also notifies our navigation drawer to update and display the options specific to the user's access level. If you examine the code below, you will see that we send the password to the server as is; in the real world you should at the very least hash it with PBKDF2 and a salt (various encryption libraries are available online), or switch to an HTTPS-capable server. We also check for input errors here, eliminating the delay of sending bad input to the server to evaluate.

public void loginUserRestServer(String email, String password) throws Exception {
        if(email.length() == 0){
            throw new Exception("Please enter email.");
        }
        if(password.length()==0){
            throw new Exception("Please enter password.");
        }

        UserRestServer result = null;
        User user= new User();
        String url = "http://<server-ip>:8181/users/";
        RestTemplate rest = new RestTemplate();
        rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());

        try {
            String queryURL = url + "search/findByEmailAndPassword?name=" + email+"&password="+password;
            Users theUser = rest.getForObject(queryURL, Users.class);
            if (!(theUser.getEmbedded() == null)) {
                result = theUser.getEmbedded().getUser().get(0);
                user.setFirstName(result.getFirstName());
                user.setLastName(result.getLastName());
                user.setEmail(result.getEmail());
                user.setAccessLevel(result.getAccessLevel());
            } else {
                throw new Exception("No user found or password is incorrect");
            }
        }catch (Exception e) {
            if(e instanceof ResourceAccessException){
                throw new Exception("Connection to server failed");
            }else {
                throw new Exception(e.getMessage());
            }
        }
        MainActivity.setCurrentUser(user);
        Message input= new Message();
        mHandler.sendMessage(input);
    }

Code Example 6: Login/GET Call to Rest Based Database Backend server **
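The PBKDF2-with-salt recommendation above can be sketched as follows. The article's server code is Java/Spring, so this Python sketch is purely conceptual, and the function names are illustrative:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a PBKDF2-HMAC-SHA256 digest; store salt + digest, never the raw password."""
    salt = salt or os.urandom(16)           # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100_000):
    """Re-derive with the stored salt and compare against the stored digest."""
    _, digest = hash_password(password, salt, iterations)
    return digest == expected
```

The client would send (or the server would store) only the derived digest, never the plain-text password.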

When a new user needs to register, the application sends a POST call to our server to add them to the database. First we check that the email is not already taken by another user, and then we create a new user object to add to the server. By default, we give the user customer access; an existing manager can change their access later if needed through the application.

    public void registerRestServer(String first, String last, String email, String password) throws Exception{
        if(first.length() == 0){
            throw new Exception("Please enter first name.");
        }
        if(last.length()==0){
            throw new Exception("Please enter last name.");
        }
        if(email.length()==0){
            throw new Exception("Please enter email.");
        }
        if(password.length()==0){
            throw new Exception("Please enter password.");
        }
        String url = "http://<server-ip>:8181/users/";
        RestTemplate rest = new RestTemplate();
        rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());

        try {
            String queryURL = url + "search/findByEmail?name=" + email;
            Users theUser = rest.getForObject(queryURL, Users.class);
            if (theUser.getEmbedded() == null) {
                UserRestServer myUser = new UserRestServer();
                myUser.setFirstName(first);
                myUser.setLastName(last);
                myUser.setEmail(email);
                myUser.setPassword(password);
                myUser.setAccessLevel(CUSTOMER_ACCESS);
                rest.postForObject(url,myUser,Users.class);
            } else {
                throw new Exception("User already exists");
            }
        }catch (Exception e) {
            if(e instanceof ResourceAccessException){
                throw new Exception("Connection to server failed");
            }else {
                throw new Exception(e.getMessage());
            }
        }
    }

Code Example 7: Register/POST Call to Rest Based Database Backend server**

For a manager updating a user's access level, the PUT call requires the href of the user on the server. Because our application doesn't store any information on users besides the current user, we must first do a GET call to the server to find the href.

public void updateUserAccessRestServer(String email, String accessLevel) throws Exception{
        if(email.length()==0){
            throw new Exception("Please enter email.");
        }
        if(accessLevel.length()==0){
            throw new Exception("Please enter accessLevel.");
        }

        String url = "http://<server-ip>:8181/users/";
        RestTemplate rest = new RestTemplate();
        rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());

        try {
            String queryURL = url + "search/findByEmail?name=" + email;
            Users theUser = rest.getForObject(queryURL, Users.class);
            if (!(theUser.getEmbedded() == null)) {
                theUser.getEmbedded().getUser().get(0).setAccessLevel(accessLevel);
                String urlStr = theUser.getEmbedded().getUser().get(0).getLinks().getSelf().getHref();
                rest.put(new URI(urlStr),theUser.getEmbedded().getUser().get(0));
            } else {
                throw new Exception("User doesn't exist");
            }
        }   catch (Exception e) {
            if(e instanceof ResourceAccessException){
                throw new Exception("Connection to server failed");
            }else {
                throw new Exception(e.getMessage());
            }
        }
    }

Code Example 8: Update/PUT Call to Rest Based Database Backend server**

And again for the remove call, we need the href to delete the user if we are the manager. If it is a customer removing their own account, though, the app can just reference the user's data (except for the password, which is not stored).

public void removeUserRestServer(String email, String password, boolean manager) throws Exception{
        if(email.length()==0){
            throw new Exception("Please enter email.");
        }
        if(password.length()==0 && !manager){
            throw new Exception("Please enter password for security reasons.");
        }

        String url = "http://<server-ip>:8181/users/";
        RestTemplate rest = new RestTemplate();
        rest.getMessageConverters().add(new MappingJackson2HttpMessageConverter());

        try {
            String queryURL;
            String exception;
            String urlStr;
            if(manager) {
                queryURL = url + "search/findByEmail?name=" + email;
                exception= "User doesn't exist";
            }else{
                queryURL= url + "search/findByEmailAndPassword?name=" + email+"&password="+password;
                exception= "User doesn't exist or password is incorrect";
            }
            Users theUser = rest.getForObject(queryURL, Users.class);
            if (!(theUser.getEmbedded() == null)) {
                if(manager) {
                    urlStr = theUser.getEmbedded().getUser().get(0).getLinks().getSelf().getHref();
                }else{
                    urlStr = MainActivity.getCurrentUser().getHref();
                }
                rest.delete(new URI(urlStr));
            } else {

                throw new Exception(exception);
            }
        }   catch (Exception e) {
                if(e instanceof ResourceAccessException){
                    throw new Exception("Connection to server failed");
                }else {
                    throw new Exception(e.getMessage());
                }
            }
    }

Code Example 9: Remove/DELETE Call to Rest Based Database Backend server**

If you already have a regular HTTP server that you would like to use, below is some example code for the GET call. 

    public void loginUserHTTPServer(String email, String password) throws Exception {
        if(email.length() == 0){
            throw new Exception("Please enter email.");
        }
        if(password.length()==0){
            throw new Exception("Please enter password.");
        }

        User result = new User();
        DefaultHttpClient httpClient = new DefaultHttpClient();
        String url = "http://10.0.2.2:8080/user";

        HttpGet httpGet = new HttpGet(url);

        HttpParams params = new BasicHttpParams();
        params.setParameter("email", email);
        params.setParameter("password", password);
        httpGet.setParams(params);
        try {
            HttpResponse response = httpClient.execute(httpGet);

            String responseString = EntityUtils.toString(response.getEntity());
            if (response.getStatusLine().getStatusCode() != 200) {
                String error = response.getStatusLine().toString();
                throw new Exception(error);
            }
            JSONObject json= new JSONObject(responseString);
            result.setEmail(email);
            result.setFirstName(json.getString("firstName"));
            result.setLastName(json.getString("lastName"));
            result.setAccessLevel(json.getString("accessLevel"));
        } catch (IOException e) {
            throw new Exception(e.getMessage());
        }
        MainActivity.setCurrentUser(result);
        Message input= new Message();
        mHandler.sendMessage(input);
    }

Code Example 10: Login Call to an HTTP server**

Summary

This series of articles has covered how to add login capabilities to our restaurant application. We added a login screen for users and some special abilities for managers to manage the users and the menu. Now, in part two, the application can talk to our server and log users in seamlessly across different tablets.

 

References

Making Your Android* Application Login Ready Part I

Accessing a REST Based Database Backend From an Android* App

Building Dynamic UI for Android* Devices

About the Author

Whitney Foster is a software engineer at Intel in the Software Solutions Group working on scale enabling projects for Android applications.

 

*Other names and brands may be claimed as the property of others.
**This sample source code is released under the Intel Sample Source Code License Agreement.

 

Quick Installation Guide for Media SDK on Windows with Intel® INDE


Intel® INDE provides a comprehensive toolset for developing media applications targeting both CPUs and GPUs, enriching the development experience of a game or media developer. Yet, if you are used to working with the legacy Intel® Media SDK, or if you just want to get started with those tools quickly, you can follow these steps and install only the Media SDK components of Intel® INDE.

Go to the Intel® INDE Web page, select the edition you want to download and hit Download link:

At the Intel INDE downloads page select Online Installer (9 MB):

At the screen, where you need to select, which IDE to integrate Getting Started tools for Android* development, click Skip IDE Integration and uncheck the Install Intel® HAXM check box:

At the component selection screen, select only Media SDK for Windows, Media RAW Accelerator for Windows, Audio for Windows, and Media for Mobile in the Analyze/Debug category (you are welcome to select any additional components that you need as well), and click Next. The installer will install all the Media SDK components.

Complete the installation and restart your computer. Now you are ready to start analyzing your game or media application performance with Intel® Media SDK components!

If you later decide that you need to install additional components of the Intel® INDE suite, rerun the installer and select the Modify option to change the installed features:

and then you can select additional components that you need:

Complete the installation and restart your computer. Now you are ready to start using additional components of the Intel® INDE suite!

 

Intel® XDK FAQs - General

 How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps that you think fits your app idea best and learn or take parts from multiple apps.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using Intel XDK. Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution. 

You can do the following to access our demo apps: 

  •  Select Project tab
  •  Select "Start a New Project"
  •  Select "Samples and Demos"
  •  Create a new project from a demo 

If you have specific questions following that, please post it to our forums.

 Can I use an external editor for development in Intel XDK? [Editor] 

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor
 How do I get code refactoring capability in Brackets*, the code editor in Intel® XDK? [Editor] 

You will have to add the "Rename JavaScript* Identifier" extension and "Quick Search" extension in Brackets* to achieve some sort of refactoring capability. You can find them in Extension Manager under File menu.

 Why doesn’t my app show up in Google* play for tablets? [Device] 

It could be that your app is using all the plugins in the project tab. Only include plugins that you need to minimize required permissions. For example, the intel.xdk.device plugin includes SMS permission which then implies that you need a mobile phone feature, which most tablets do not have. 

 What is the global-settings.xdk file and how do I locate it?

global-settings.xdk is a file that contains information about all your projects in Intel XDK along with settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. However, users are advised to modify this at their own risk and to always keep a backup of the original.

You can locate the file at:

[Mac OSX*] ~/Library/Application Support/XDK/global-settings.xdk

[Windows*] %localappdata% (or) %localappdata%\XDK

[Linux*] ~/.config/XDK/global-settings.xdk
 When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk and xhr libraries are only needed with legacy build tiles. The Cordova* library is needed for all. When building with Cordova* tiles, intelxdk and xhr libraries are ignored and so they can be omitted.

 What is the process if I need a .keystore file? [Keystore] 

Please send an email to html5tools@intel.com specifying the email address associated with your Intel XDK account in its contents.

 How do I rename my project that is a duplicate of an existing project?

Make a copy of your existing project directory and delete the .xdk and .xdke files from them. Import it into Intel XDK using the ‘Import your HTML5 Code Base’ option and give it a new name to create a duplicate.

 How do I try to recover when Intel XDK won't start or hangs?
  • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.   
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files. 
    On a [Windows*] machine this can be done using the following on a standard command prompt (administrator not required) :
    > cd %AppData%\..\Local\XDK
    > del *.* /s/q
    To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:
    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *
    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.

Please refer to this post for more details regarding troubles in a VM. It is possible to make this scenario work but it requires diligence and care on your part.

  • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

 Is Intel XDK an open source project? How can I contribute to the Intel XDK community? 

No, it is not an open source project. However, it utilizes many open source components that are then assembled into Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up Intel XDK.  

The following open source components are the major elements that are being used by Intel XDK: 

  •  Node-Webkit
  •  Chromium
  •  Ripple* emulator
  •  Brackets* editor
  •  Weinre* remote debugger
  •  Crosswalk*
  •  Cordova*
  •  App Framework*
 How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at http://developer.android.com/tools/help/draw9patch.html on how to create a 9 patch png image. We also plan to incorporate them in some of our sample apps to illustrate their use.

 How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched? [Security] 

You can try adding nw.exe as the app that needs an exception in AVG.

 What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID for your

iOS* App

Android* App

Windows* Phone 8 App

 Is it possible to modify Android* Manifest through Intel XDK? [Android*] 

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that contains only a plugin.xml file, which can then add entries to the AndroidManifest.xml file during the build process. In essence, you need to change the plugin.xml file of the locally cloned plugin to include directives that will make those modifications to the AndroidManifest.xml file. Here is an example of a plugin that does just that:  

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0" id="com.tricaud.webintent" version="1.0.0">
  <name>WebIntentTricaud</name>
  <description>Addition to AndroidManifest.xml</description>
  <license>MIT</license>
  <keywords>android, WebIntent, Intent, Activity</keywords>
  <engines>
    <engine name="cordova" version=">=3.0.0" />
  </engines>
  <!-- android -->
  <platform name="android">
    <config-file target="AndroidManifest.xml" parent="/manifest/application">
      <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
        <intent-filter>
          <action android:name="android.intent.action.SEND" />
          <category android:name="android.intent.category.DEFAULT" />
          <data android:mimeType="*/*" />
        </intent-filter>
      </activity>
    </config-file>
  </platform>
</plugin>

You can check the AndroidManifest.xml created in the apk using the aapt tool with the command line:  

aapt l -M appli.apk >text.txt  

This writes the list of files in the apk and the details of the AndroidManifest.xml to text.txt.

 How can I share my Intel XDK app build? [Build] 

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image. 

 Why does my iOS build fail when I am able to test it successfully on a device and the emulator? [iOS*/Build] 

Common reasons include:

  • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should contain only letters, numbers and spaces.
 How do I add multiple domains in Domain Access? 

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab. 
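For example, the extra references added to the intelxdk.config.additions.xml file might look like the following (the domain names here are placeholders; use your own):

```xml
<!-- extra domains beyond the one in the "Domain Access" field; these names are placeholders -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.com" />
```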

 How do I build more than one app using the same Apple developer account? [iOS*] 

On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

 How do I include search and spotlight icons as part of my app? [iOS*] 

Please refer to this article in Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top level directory (same as the other intelxdk.*.config.xml files) and add the following lines for supporting icons in Settings and other areas in iOS*.

<platform name="ios">
  <!-- iOS* 7.0+ -->
  <!-- iPhone / iPod Touch -->
  <icon src="res/ios/icon-60.png" width="60" height="60" />
  <icon src="res/ios/icon-60@2x.png" width="120" height="120" />
  <!-- iPad -->
  <icon src="res/ios/icon-76.png" width="76" height="76" />
  <icon src="res/ios/icon-76@2x.png" width="152" height="152" />
  <!-- iOS* 6.1 -->
  <!-- Spotlight Icon -->
  <icon src="res/ios/icon-40.png" width="40" height="40" />
  <icon src="res/ios/icon-40@2x.png" width="80" height="80" />
  <!-- iPhone / iPod Touch -->
  <icon src="res/ios/icon.png" width="57" height="57" />
  <icon src="res/ios/icon@2x.png" width="114" height="114" />
  <!-- iPad -->
  <icon src="res/ios/icon-72.png" width="72" height="72" />
  <icon src="res/ios/icon-72@2x.png" width="144" height="144" />
  <!-- iPhone Spotlight and Settings Icon -->
  <icon src="res/ios/icon-small.png" width="29" height="29" />
  <icon src="res/ios/icon-small@2x.png" width="58" height="58" />
  <!-- iPad Spotlight and Settings Icon -->
  <icon src="res/ios/icon-50.png" width="50" height="50" />
  <icon src="res/ios/icon-50@2x.png" width="100" height="100" />
</platform>

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

Note: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target. 

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

 Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

 How do I sign an Android* app using an existing keystore? [Android*/Keystore] 

Uploading an existing keystore in Intel XDK is not currently supported but you can send an email to html5tools@intel.com with this request. We can assist you there.

 How do I build separately for different Android* versions? [Android*] 

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

 How do I display the 'Build App Now' button if my display language is not English? [Build] 

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you can change your display language to English; the language pack can be downloaded via Windows* Update. Once you have installed the English language pack, go to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

Back to FAQs Main  


Intel® XDK FAQs - Cordova

 How do I set app orientation?

If you are using Cordova* 3.X build options (Crosswalk* for Android*, Android*, iOS*, etc.), you can set the orientation under the Projects panel > Select your project > Cordova* 3.X Hybrid Mobile App Settings - Build Settings. Under the Build Settings, you can set the Orientation for your desired mobile platform.  

If you are using the Legacy Hybrid Mobile App Platform build options (Android*, iOS* Ad Hoc, etc.), you can set the orientation under the Build tab > Legacy Hybrid Mobile App Platforms Category- <desired_mobile_platform> - Step 2 Assets tab. 

[iPad] Create a plugin (directory with one file) that only has a config xml that includes the following: 

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <array>
    <string>UIInterfaceOrientationPortrait</string>
  </array>
</config-file>

Add the plugin on the build settings page. 

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. You can import it as a third-party Cordova* plugin using the Cordova* registry notation:

  •  net.yoik.cordova.plugins.screenorientation (includes latest version at the time of the build)
  •  net.yoik.cordova.plugins.screenorientation@1.3.2 (specifies a version)

Or, you can reference it directly from the GitHub repo: 

The second reference provides the git commit referenced here (we do not support pulling from the PhoneGap registry).

 Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK’s build system will work with it.

 How do I send an email from my App?
You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.
 How do you create an offline application?
You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
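A minimal offline.appcache file might look like the following (the file names are placeholders for your app's actual assets):

```
CACHE MANIFEST
# v1 - change this comment to force clients to refresh their cached copies

CACHE:
index.html
js/app.js
css/app.css

NETWORK:
*
```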
 How do I work with alarms and timed notifications?
Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK’s build system will work with it. 
 How do I get a reliable device ID? [Device]
You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8. 
 How do I implement In-App purchasing in my app? [Plugin]
There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called ‘In App Purchase’ which can be downloaded here.
 How do I install custom fonts on devices?
Fonts can be considered as an asset that is included with your app, not shared among other apps on the device just like images and CSS files that are private to the app and not shared. It is possible to share some files between apps using, for example, the SD card space on an Android* device. If you include the font files as assets in your application then there is no download time to consider. They are part of your app and already exist on the device after installation.
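For example, a bundled font file can be declared with the standard CSS @font-face rule (the font name and file path below are placeholders):

```css
/* the font file ships as an app asset; name and path are placeholders */
@font-face {
  font-family: "MyAppFont";
  src: url("fonts/MyAppFont.ttf") format("truetype");
}

body {
  font-family: "MyAppFont", sans-serif;
}
```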
 How do I access the device’s file storage? [Plugin]
You can use HTML5 local storage and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
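As a small sketch of the HTML5 local storage approach (the storage object is injected so the same code can be exercised off-device; the key and field names are placeholders):

```javascript
// Wrap localStorage behind a tiny store; JSON handles the object <-> string conversion.
function makeSettingsStore(storage) {
  return {
    save: function (settings) {
      storage.setItem("settings", JSON.stringify(settings));
    },
    load: function () {
      var raw = storage.getItem("settings");
      return raw ? JSON.parse(raw) : {}; // default when nothing has been stored yet
    }
  };
}

// In the app: var store = makeSettingsStore(window.localStorage);
```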
 Why isn't AppMobi* push notification services working? [Plugin]
This seems to be an issue on AppMobi’s end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.
 How do I configure an app to run as a service when it is closed?
If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.
 How do I dynamically play videos in my app?

1) Download the JavaScript* and CSS files from https://github.com/videojs

2) Add them in the HTML5 header. 
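The header includes might look like the following (the paths are placeholders for wherever you copied the video.js files into your project):

```html
<!-- placeholders: adjust the paths to where you copied the video.js files -->
<link href="lib/videojs/video-js.css" rel="stylesheet">
<script src="lib/videojs/video.js"></script>
```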


 3) Add a panel ‘main1’ that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

<div class="panel" id="main1" data-appbuilder-object="panel" style="">
  <video id="example_video_1" class="video-js vjs-default-skin" controls="" preload="auto" width="200" poster="camera.png" data-setup="{}">
    <source src="JAIL.mp4" type="video/mp4">
    <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p>
  </video>
  <a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a>
</div>

 4) When the user clicks on the video, the click event sets the ‘src’ attribute of the video element to what the user wants to watch. 

function runVid2() {
    document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
    $.ui.loadContent("#main1", true, false, "pop");
}

 5) The ‘main1’ panel opens waiting for the user to click the play button. 

Note: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

 How do I design my Cordova* built Android* app for tablets? [Android*]
This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.
 How do I resolve icon related issues with Cordova* CLI build system? [Build]

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS* 6, you need to manually specify the icon sizes that iOS* 6 uses. 

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" />
<icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These are not required in the build system and so you will have to include them in the additions file. 

For more information on adding build options using intelxdk.config.additions.xml, visit: https://software.intel.com/en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file
 Is there a plugin I can use in my App to share content on social media? [Plugin]

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

 Iframe does not load in my app. Is there an alternative? [Plugin]
Yes, you can use the inAppBrowser plugin instead.
 Why are intel.xdk.istablet and intel.xdk.isphone not working? [Plugin]
Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user’s screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion on the same.
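As a sketch of the viewport-size approach (the 600px breakpoint below is an arbitrary assumption, not a value from the Intel XDK docs):

```javascript
// Classify the device by its smaller screen dimension.
// The 600px breakpoint is an assumption; tune it for your app.
function isTabletSized(width, height) {
  return Math.min(width, height) >= 600;
}

// In the app you would pass the real values:
//   var tablet = isTabletSized(screen.width, screen.height);
```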
 How do I work with the App Security plugin on Intel XDK? [Plugin]

Select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. Building it as a Legacy Hybrid app has been known to cause issues when compiled and installed on a device.

 Why does my build fail with Admob plugins? Is there an alternative? [Plugin]

Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play services jar to the project. The "com.google.playservices@19.0.0" plugin is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using Intel XDK.

To stay compatible with Intel XDK, the dependency of the admob plugin should be changed to "com.google.playservices@19.0.0".

 Why does the intel.xdk.camera plugin fail? Is there an alternative? [Plugin]
There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead and change the version to 0.3.3.
 How do I resolve Geolocation issues with Cordova? [Plugin]

Give this app a try, it contains lots of useful comments and console log messages. However, use Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. Intel XDK buttons on the sample app will not work in a built app because the Intel XDK geo plugin is not included. However, they will partially work in the Emulator and Debug. If you test it on a real device, without the Intel XDK geo plugin selected, you should be able to see what is working and what is not on your device. There is a problem with the Intel XDK geo plugin. It cannot be used in the same build with the Cordova geo plugin. Do not use the Intel XDK geo plugin as it will be discontinued. 

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It will get a reading based on a variety of inputs; it is usually not as accurate as geo fine, but it is generally accurate enough to know what town you are located in and your approximate location in that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data, as there is no guarantee you'll be able to get a geo fine reading at all, or in a reasonable period of time. Success with geo fine is highly dependent on a lot of parameters that are typically outside of your control.
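A minimal sketch of the coarse-then-fine pattern, using the standard geolocation API that the Cordova geo plugin exposes (the timeout and maximumAge values are assumptions, not recommendations from the docs):

```javascript
// Build position options: a coarse reading tolerates a cached fix, a fine reading does not.
// The timeout and maximumAge values below are assumptions; tune them for your app.
function geoOptions(fine) {
  return {
    enableHighAccuracy: !!fine,
    timeout: fine ? 30000 : 5000,
    maximumAge: fine ? 0 : 60000
  };
}

// On a device, navigator.geolocation is supplied by the webview / geo plugin:
function getPosition(onSuccess, onError) {
  // quick coarse reading first (also primes the geo cache), then ask for a fine fix
  navigator.geolocation.getCurrentPosition(onSuccess, onError, geoOptions(false));
  navigator.geolocation.getCurrentPosition(onSuccess, onError, geoOptions(true));
}
```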

 Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it? [Plugin]

Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

To make this work you will need to do the following: 

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected 
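The platform-detection step can be sketched as follows (the regular expressions are simple assumptions; uaparser.js is more thorough):

```javascript
// Crude platform detection from the user agent string; good enough to branch on.
function platformFromUA(ua) {
  if (/Android/i.test(ua)) return "android";
  if (/iPhone|iPad|iPod/i.test(ua)) return "ios";
  return "other";
}

// In the app: var platform = platformFromUA(navigator.userAgent);
// then branch to the plugin on Android* or to a <video> element on iOS*.
```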

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->. 

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

 How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.

 Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.
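For reference, a sketch of how those status callbacks are wired up with the Cordova* media plugin (the clip path is a placeholder):

```javascript
// Wrapped in a function so it only runs on a device where Media is defined
// (the Media constructor comes from the Cordova* media plugin).
function playClip(src) {
  var media = new Media(
    src,
    function () { console.log("playback finished"); },               // success
    function (err) { console.log("playback error: " + err.code); },  // error
    function (status) { console.log("status changed: " + status); }  // status callback
  );
  media.play();
  return media;
}

// On a device: playClip("sounds/beep.mp3");
```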

Back to FAQs Main 

Intel® XDK FAQs - Debug & Test

 What are the requirements for Testing on Wi-Fi? [Testing]

1) Both Intel XDK and App Preview mobile app must be logged in with the same user credentials.

2) Both devices must be on the same subnet.

Note: Your computer's Security Settings may be preventing Intel XDK from connecting with devices on your network. Double check your settings for allowing programs through your firewall. At this time, testing on Wi-Fi does not work within virtual machines.

 How do I configure app preview to work over Wi-Fi? [App Preview]

1) Ensure that both Intel XDK and App Preview mobile app are logged in with the same user credentials and are on the same subnet

2) Launch App Preview on the device 

3) Log into your Intel XDK account 

4) Select "Local Apps" to see a list of all the projects in Intel XDK Projects tab 

5) Select desired app from the list to run over Wi-Fi  

Note: Ensure the app source files are referenced from the right source directory. If it isn't, on the Projects Tab, change the 'source' directory so it is the same as the 'project' directory and move everything in the source directory to the project directory. Remove the source directory and try to debug over local Wi-Fi.

 How do I clear app preview cache and memory? [App Preview]

[Android*] Simply kill the app running on your device as an Active App on Android* by swiping it away after clicking the "Recent" button in the navigation bar. Alternatively, you can clear data and cache for the app from under Settings App > Apps > ALL > App Preview. 

[iOS*] By double tapping the Home button then swiping the app away. 

[Windows*] You can use the Windows* Cache Cleaner app to do so.

 What are the Android* devices supported by App Preview? [App Preview/Android*]

We officially only support and test Android* 4.x and higher, although you can use Cordova for Android* to build for Android* 2.3 and above. For older Android* devices, you can use the build system to build apps and then install and run them on the device to test. To help in your testing, you can include the weinre script tag from the Test tab in your app before you build your app. After your app starts up, you should see the Test tab console light up when it sees the weinre script tag contact the device (push the "begin debugging on device" button to see the console). Remember to remove the weinre script tag before you build for the store. 

 What do I do if Intel XDK stops detecting my Android* device? [Debug/Android*]

When Intel XDK is not running, kill all adb processes that are running on your workstation and then restart Intel XDK, as conflicts between different versions of adb frequently cause such issues. Ensure that applications such as Eclipse that run copies of adb are not running. You may scan your disk for copies of adb: 

[Linux*/OS X*]:

$ sudo find / -name adb -type f 

[Windows*]:

> cd \
> dir /s adb.exe

For more information on Android* USB debug, visit the Intel XDK documentation on debugging and testing.

 My third party plugins do not show up on the debug tab. How do I debug an app that contains third party plugins? [Debug]

If you are using the Debug, Emulate or Test tabs you will not see any third-party plugins. At the moment, the only way to debug an app that contains a third-party plugin is to build it and debug the built app installed on your device. We are working on a solution to work with third-party plugins, but it is still in development. 

[Android*]

1) For Crosswalk* or Cordova for Android* build, create an intelxdk.config.additions.xml file that contains the following lines: 

<!-- Change the debuggable preference to true to build a remote CDT debuggable app for --><!-- Crosswalk* apps on Android* 4.0+ devices and Cordova apps on Android* 4.4+ devices. --><preference name="debuggable" value="true" /><!-- Change the debuggable preference to false before you build for the store. --> 

and place it in the root directory of your project (in the same location as your other intelxdk.config.*.xml files). Note that this will only work with Crosswalk* on Android* 4.0 or newer devices or, if you use the standard Cordova for Android* build, on Android* 4.4 or greater devices.

2) Build the Android* app

3) Connect your device to your development system via USB and start app

4) Start Chrome on your development system and type "chrome://inspect" in the Chrome URL bar. You should see your app in the list of apps and tabs presented by Chrome, you can then push the "inspect" link to get a full remote CDT session to your built app. Be sure to close Intel XDK before you do this, sometimes there is interference between the version of adb used by Chrome and that used by Intel XDK, which can cause a crash. You might have to kill the adb process before you start Chrome (after you exit the Intel XDK). 

[iOS*]

Refer to the instructions on the updated Debug tab docs to get on-device debugging. We do not have the ability to build a development version of your iOS* app yet, so you cannot use this technique to build iOS* apps. However, you can include the weinre script from the Test tab in your iOS* app when you build it and use the Test tab to remotely access your built iOS* app. This works best if you include a lot of console.log messages.

[Windows* 8]

You can use the Test tab, which gives you a weinre script. Include it in the app that you build, run the app, and connect to the weinre server to work with the console.  

Alternatively, you can use App Center to setup and access the weinre console (go here and use the "bug" icon).  

Another approach is to write console.log messages to a <textarea> screen on your app. See either of these apps for an example of how to do that:  

 Why does my device show as offline on Intel XDK Debug? [Debug] 
“Media” mode is the default USB connection mode, but due to some unidentified reason, it frequently fails to work over USB on Windows* machines. Configure the USB connection mode on your device for "Camera" instead of "Media" mode.
 What do I do if my remote debugger does not launch? [Debug] 

You can try the following to have your app run on the device via debug tab:  

  • Place the intelxdk.js library before the </body> tag
  • Place the app specific .js file after it
  • Place the call of initialization in the device ready event function
 Why does it fail when I try to push files to the server for debugging on device? [Debug]

This problem seems to happen primarily on Android* 4.0 devices and is being looked into by our engineers.

Back to FAQs Main 

Intel® XDK FAQs - App Designer

 Is there a tool to auto-build elements to see how it is done?

Yes, you may use the App Starter tool put together by the author of App Framework*. It is very useful for learning how to build elements with App Framework*. If you wish to design your project layout using this tool, you will need to copy the layout files it creates to your Intel XDK project directories and turn your project into a non-App Designer project.

 What does the Google* Map widget’s "center type" attribute and its values 'Auto calculate', 'Address' and 'Lat/Long' mean? 

That parameter defines how the map view is centered in your div. It is used to initialize the map as follows:  

  • Lat/Long: center the map on a specific latitude and longitude (that you provide on the properties page)
  • Address: center the map on a specific address (that you provide on the properties page)
  • Auto Calculate: center the map on a collection of markers 

Note: This is just for initialization of the map widget. Beyond that you must use the standard Google* maps APIs to move and/or modify the map. See the "google_maps.js" code for initialization of the widget and some calls to the Google* maps APIs. There is also a pointer to the Google* maps API at the beginning of the JS file.  

To get the current position, you have to use the Geo API, and then push that into the Maps API to display it. Google* Maps API will not give you any device data, it will only display information for you. Please refer to this sample app for some help with Geo API. There are a lot of useful comments and console.log messages.

 How do I size UI elements in my project?

Trying to implement "pixel perfect" user interfaces with HTML5 apps is not recommended, as there is a wide array of device resolutions and aspect ratios and it is impossible to ensure you are sized properly for every device. Instead, you can research the use of "responsive web design" techniques to build your UI so that it adapts to different sizes automatically. You can also look into media queries to see how you can use them.  

Note: The viewport is sized in CSS pixels (aka virtual pixels or device independent pixels) and so the physical pixel dimensions are not what you will normally be designing for.
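A quick illustration of the note above: physical pixels divided by the device pixel ratio give the CSS (device-independent) pixels your layout actually gets. The helper below is only a sketch of that relationship:

```javascript
// CSS pixels = physical pixels / devicePixelRatio.
// In a webview you would read window.devicePixelRatio at runtime.
function toCssPixels(physicalPx, devicePixelRatio) {
  return physicalPx / devicePixelRatio;
}

// A 1080-physical-pixel-wide screen with devicePixelRatio 3 offers
// only 360 CSS pixels of width to your layout.
```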

 How do I create lists, buttons and other UI elements in Intel XDK?

Intel XDK provides you with a way to build HTML5 apps that are run in a webview on the target device. This is analogous to running in an embedded browser (refer to this blog for details). Thus, the programming techniques are the same as those you would use inside a browser, if you were writing a single-page client-side HTML5 app. You can use Intel XDK’s App Designer to drag and drop UI elements.

 Why is the user interface for Chrome on Android* unresponsive? [App Framework*] 

It could be that you are using an outdated version of the App Framework* files. You can find the recent versions here. You can safely replace any App Framework* files that App Designer installed in your project with more recent copies as App Designer will not overwrite the new files.

 How do I work with more recent versions of App Framework* since the latest Intel XDK release? [App Framework*]

You can replace the App Framework* files that the Intel XDK automatically inserted with more recent versions that can be found here. App Designer will not overwrite your replacement files.

 Is there a replacement to XPATH in App Framework* for selecting nodes from an XML document? [App Framework*]

App Framework* is a UI library that implements a subset of the jQuery* selector library. If you wish to use jQuery* for XPath manipulation, it is recommended that you use jQuery* as your selector library and not App Framework*. However, it is also possible to use jQuery* with the UI components of App Framework*. Please refer to this entry in the App Framework* docs.

It would look similar to this:

<script src="lib/jq/jquery.js"></script>
<script src="lib/af/jq.appframework.js"></script>
<script src="lib/af/appframework.ui.js"></script>

 Why does my App Framework* app that was previously working suddenly start having issues (Android* 4.4.4)?  [App Framework*]

Ensure you have upgraded to the latest version of App Framework*. If your app is built with "legacy" then try using the Cordova build and set the "Targeted Android* Version" to 19. The legacy build targets Android* 4.2.

 How do I manually set a theme? [App Framework*]

If you want to, for example, change the theme only on Android*, you can add the following lines of code:

  1. $.ui.autoLaunch = false; //Stop the App Framework* auto launch right after you load App Framework*
  2. Detect the underlying platform using either navigator.userAgent or intel.xdk.device.platform or window.device.platform. If the platform detected is Android*, set $.ui.useOSThemes=false to disable OS themes and set <div id="afui" class="android light">
  3. Otherwise, set $.ui.useOSThemes=true;
  4. When device ready and document ready have been detected, add $.ui.launch();
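The steps above can be sketched as follows. $.ui and the "afui" div come from App Framework*; isAndroid() is a helper defined here purely for illustration:

```javascript
// Step 2's platform check, as a small testable helper (illustrative only).
function isAndroid(userAgent) {
  return /android/i.test(userAgent);
}

// In the app, right after loading App Framework* (requires a device/webview):
// $.ui.autoLaunch = false;                                   // step 1
// if (isAndroid(navigator.userAgent)) {                      // step 2
//   $.ui.useOSThemes = false;
//   document.getElementById('afui').className = 'android light';
// } else {
//   $.ui.useOSThemes = true;                                 // step 3
// }
// document.addEventListener('deviceready', function () {     // step 4
//   $.ui.launch();
// });
```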

 

 

Back to FAQs Main 

Intel® XDK FAQs - IoT

 Can Intel XDK IoT edition run alongside the regular Intel XDK?

Yes. Since the IoT edition is a superset of the standard edition, it contains the same features as the standard edition plus some additional features for IoT development, and it is fine to have them both installed. However, you cannot run both editions at the same time. This is expected to be fixed in future releases. 

 Why isn't the Emulate tab visible?

The Emulate tab will only be visible for projects that are created from "Start with a Template", "Work with a Demo", "Import an Existing HTML5 Project", and "Start with App Designer" options under the App Developer Projects section.

 How do I use the Web Services API in my IoT project from main.js?

The main.js file is no different from a typical JavaScript file besides the fact that it is used in the context of node.js. You can create a simple http server that serves up an index.html. The index.html file should contain a reference to the JavaScript files that update the HTML DOM elements with the relevant Web Services data as you would with a typical HTML5 project. The only difference here is that you are accessing the index.html (HTML5 application) from the http server function in the main.js file. The Web Services enabled application would be accessible through your browser since you will need to access it using the IoT device's IP address.

You can find more information here.

Back to FAQs Main 

Intel® XDK FAQs - Crosswalk

 How do I play audio with different playback rates?

Building your app with Crosswalk* allows you to play audio files with different playback rates.

Here is a code snippet that allows you to specify playback rate:

var myAudio = new Audio('/path/to/audio.mp3');
myAudio.play();
myAudio.playbackRate = 1.5;

 Why are Intel XDK's Android* build files so large? [Build]

If your app has been built with Crosswalk*, it will be a minimum of 15-18MB in size because it includes a complete web browser to use instead of the built-in webview on the device. Despite the size, this is the preferred solution for Android*, because the built-in webviews on the majority of Android* devices are inconsistent. 

Changing the code base from "gold" to "lean" will reduce the size, but the "gold" option only applies to older legacy builds. Investing time and effort in the legacy build system is not recommended as it will be obsolete sometime in 2015 and cannot take advantage of the numerous Cordova plugins that are available for the Cordova and Crosswalk* build systems.

 Why does my Android* Crosswalk* build fail with com.google.playservices plugin? [Plugin]

Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. To keep your app compatible with Intel XDK, use "com.google.playservices@19.0.0".

 Why is the size of my installed app much larger than the apk for a Crosswalk* application? [General]

This is because the apk is a compressed image. When Crosswalk* starts, it creates some data files for caching purposes, which also increases the installed size of the application.

 Why does my app fail to run on some devices? [General]

There are some Android* tablets where the GPU hardware/software does not work properly. This could be due to poor design or inadequate testing by the manufacturer. Your device might fall into this category.

Back to FAQs Main 

Intel® XDK FAQs


General 

Here you can find general questions and answers related to Intel XDK like how to get started, what .xdk files do, how to share your app build etc.

Cordova

Here you can find answers to Cordova related questions like plugin issues and alternatives, creating an offline app, designing your app for tablets etc.

Crosswalk

Here you can find questions and answers related to Crosswalk, an HTML5 runtime that allows you to deploy your app with its own runtime, eliminating the dependency on the device's native webview.

Debug & Test

Here you can find questions on testing and debugging your app using Intel XDK and App Preview.


App Designer

Here you can find questions on how to use App Designer, the UI layout tool in Intel XDK and App Framework, an open-source HTML5 UI framework.


IoT

Here you can find questions and answers related to Intel XDK IoT Edition, a development environment with the capability to create node.js* applications for Intel IoT platforms.

 

Mi-Corporation and Intel Boost Productivity with Device-Specific User Experiences


Read the case study

To realize its full potential, a mobile business application must be tailored to the resources provided by the device it runs on. UIs that flexibly respond to factors such as the screen size and input devices available can help foster intuitive user experiences; apps that seem poorly matched to the platform may create frustration and hamper productivity. Usability on Ultrabook™ and 2-in-1 devices exemplifies the opportunity for delivering advantages through software design that responds to the capabilities of the target system.

Building on more than 15 years of experience delivering cross-platform mobile solutions, Mi-Corporation recognized the opportunity associated with support for the mode-switching technology of 2-in-1 devices. Intel engineers and the company’s chief technical officer Chris DiPierro collaborated to enable the Mi-Forms product with a robust experience that takes advantage of the target device’s capabilities, whether it is functioning as a laptop or as a tablet.

Challenge

Mi-Corporation wanted to provide mobile workers with the best possible user experience for data capture on Ultrabooks and 2-in-1 systems. To do so, it needed to enhance efficiency and usability with changes to UI elements and input methods specific to whether the target device is in laptop mode or tablet mode.

Solution

Intel provided engineering support and development resources to Mi-Corporation that helped the company tailor the operation of the Mi-Forms product according to the usage mode of the 2-in-1 target device.

  • Software features were identified through a collaborative engineering engagement between Intel and Mi-Corporation, laying the groundwork for the optimization of the Mi-Forms solution for 2-in-1 devices.
  • Technical guidance was given through resources from Intel, including articles, best practices, and other documentation available from Intel® Developer Zone.
  • A Dell XPS target system was provided to Mi-Corporation by Intel as a software-development platform to validate the outcome of the enabling effort.

Benefits

Mi-Forms is now differentiated within its market segment by its ability to adapt dynamically to either laptop mode or tablet mode on a 2-in-1 device. By optimizing the use of UI elements such as the on-screen keyboard, menus, and toolbars, as well as input devices such as pen and touch, the software presents an intelligent environment to the user. In addition, by being featured on Intel websites and at Intel events, the Mi-Forms product has received valuable publicity.

Read the case study:
Mobile Workers: Using 2 in 1 state, touch, and other input to enable e-Form Productivity [PDF 2.12 MB]


ART vs Dalvik* - Introducing the New Android* x86 Runtime


One of the most significant Android* 5.x changes is the shift to the relatively new way of executing applications called Android Runtime (ART). The option to use ART has been available since the Android 4.4 (KitKat) release. KitKat users had a choice between ART and its predecessor Dalvik. Now ART is the only runtime environment in Android Lollipop.

The ART and Dalvik runtimes are compatible, running the same Dex bytecode, so apps developed for Dalvik should work fine when running with ART. But ART has a number of specific differences that will be explained in this article.

Let’s consider the major features implemented in ART.

Ahead-of-Time Compilation

The main feature that makes ART different from Dalvik is the Ahead-Of-Time (AOT) compilation paradigm. According to the AOT concept, DEX bytecode translation happens only once, while the app is installing on the device. It brings real benefits in contrast with Dalvik’s Just-In-Time (JIT) compilation approach, which translates code every time you run an app. Here is an article where you can find more information on how AOT compilation changes the performance, battery life, installation time, and storage footprint of your users’ devices.

Garbage Collection

Another improvement in ART is memory management. Garbage collection (GC) is a critical process for performance because it can affect user-experience. ART’s garbage collector contains some enhanced features that can outperform Dalvik’s GC.

First, the new GC enumerates all allocated objects and marks all reachable objects in only one pause while Dalvik’s GC pauses twice.

Second, parallelization of the mark-and-sweep algorithm enables the app to reduce pause time noticeably.

Third, ART has a lower total GC time for certain cases where cleaning up recently-allocated, short-lived objects is required.

Fourth, ART makes concurrent GC timelier. As a result, the app does not have to stop if it attempts to allocate memory when the heap is already full.

The final change in memory management is the introduction of a compacting GC. Sometimes OutOfMemoryErrors occur not because the app is out of memory, but because no single contiguous block of suitable size is available to satisfy the app's request. Such errors are the reason the Android Open Source Project (AOSP) is developing a compacting GC for ART: it moves objects so that freed-up blocks are merged into contiguous regions of memory, which can be easily allocated.

It’s a very useful feature, but while compacting GC is still in the development stage, there are some restrictions, especially for apps using the Java* Native Interface (JNI). Android advises developers to avoid incompatible operations and pay close attention to pointers. It is also wise to use CheckJNI to catch potential errors.
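An illustrative sketch (not ART code) of why fragmentation triggers OutOfMemoryErrors: an allocation needs one contiguous free block, not merely enough total free bytes, and compaction is what restores such blocks:

```javascript
// Free memory modeled as a list of separate block sizes (illustrative only).
// An allocation succeeds only if one block is large enough on its own.
function canAllocate(freeBlocks, request) {
  return freeBlocks.some(function (block) { return block >= request; });
}

function totalFree(freeBlocks) {
  return freeBlocks.reduce(function (sum, block) { return sum + block; }, 0);
}

// Before compaction: [4, 4, 4] has 12 bytes free, yet a 12-byte request fails.
// After a compacting GC merges the blocks into [12], the same request succeeds.
```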

Development & Debugging

The last enhanced ART feature concerns the area of application development and debugging.

ART adds support for a dedicated sampling profiler. Previously, TraceView, the graphical viewer for execution logs, was commonly used for profiling Android apps, but using it had a significant impact on run-time performance. Sampling profilers take periodic samples driven by operating system interrupts, so they are less intrusive to apps and have fewer side effects than instrumentation-based approaches. The new sampling support added to TraceView provides an accurate picture of app behavior while apps run at close to their natural speed, without significant slowdown.

Also, ART supports several new debugging options like monitoring locks, calculating live instances in some classes, setting a field watchpoint to stop the app’s execution when a specific event occurs, and so on.

The other improvement to ART that can accelerate development is clearer diagnostic details in exceptions and crash reports. The latest versions of Dalvik had expanded exception details for java.lang.ArrayIndexOutOfBoundsException and java.lang.ArrayStoreException. ART gives detailed information for java.lang.ClassCastException, java.lang.ClassNotFoundException, and java.lang.NullPointerException.

java.lang.NullPointerException: Attempt to write to field 'int android.accessibilityservice.AccessibilityServiceInfo.flags' on a null object reference

java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String java.lang.Object.toString()' on a null object reference

More information is available in the article: Verifying App Behavior on the Android Runtime (ART).

Summary

The AOT compilation, garbage collection, and development and debugging improvements discussed here make ART a great improvement over Dalvik. Modern devices are multicore, with the large memory and storage capacity that ART requires. Furthermore, some ART functionality, such as the compacting GC, is still under development. The new runtime is at the forefront of Google’s development, which means it will likely see expanded and upgraded capabilities.

References

Are you ready to build 64-Bit Applications for Android* Using 64-bit Emulator Images?


Introduction

Download  How-to-use-64-bits-Emulator-Image.pdf

Mobile development on Android* is increasingly focused on 64-bit systems. Smartphones with 64-bit Android have better performance; that’s why more smartphones use 64-bit Android and the number of 64-bit applications increases every day. Any developer can write 64-bit applications, but some developers don’t have a 64-bit device on which to validate their applications. The Android L emulator solves this problem: if you don’t have a device with full 64-bit support, you can test your applications in an emulator. Google has announced the availability of the 64-bit Android L emulator for Intel® x86 architecture.

Downloading and Installing the Emulator

To use the 64-bit Android Emulator you must download Android Studio. It includes:

  • Android Studio IDE
  • Android SDK tools
  • Android 5.0 Platform
  • Android 5.0 Emulator

Before you set up Android Studio, be sure you have installed JDK 6 or higher (the JRE alone is not sufficient); JDK 7 is required when developing for Android 5.0 and higher. To check whether (and which version of) the JDK is installed, open a terminal and type "javac -version". If the JDK is not available or the version is lower than 6, download the JDK. On some Windows* systems, the launcher script cannot find where Java* is installed. If you encounter this problem, you need to set an environment variable indicating the correct location:

Select Start menu > Computer > System Properties > Advanced System Properties. Then open Advanced tab > Environment Variables and add a new system variable “JAVA_HOME” that points to your JDK folder, for example “C:\Program Files\Java\jdk1.8.0_05”. Android Studio is now ready and loaded with the Android developer tools.

Using the Android Virtual Device Manager

The Android SDK includes a virtual mobile device emulator that runs on your computer. The emulator lets you prototype, develop, and test Android applications without using a physical device.

The Android emulator mimics all of the hardware and software features of a typical mobile device, except that it cannot place actual phone calls. It provides a variety of navigation and control keys, which you can "press" using your mouse or keyboard to generate events for your application. It also provides a screen in which your application is displayed, together with any other active Android applications.

The emulator’s interface (shown below) is easy to understand.


Figure 1.

 

In Figure 1’s window you see the list of known device definitions. You can use one of them to create an Android Virtual Device or you can create device definitions yourself by clicking “Create Device” (Figure 2).


Figure 2.

 

Choose the parameters for your new device and click “Create Device” again. The new device will appear in the list of device definitions (Figure 3).


Figure 3.

 

To create a new Android Virtual Device click “Create AVD”. A dialog box will appear allowing you to choose the desired settings for your new device such as name of device, parameters of camera, and storage (Figure 4).


Figure 4.

 

Then press "OK". After closing this window, you can start the new device by pressing "Start".


Figure 5.

Starting and Stopping the Android Emulator

During development and testing of your application, you install and run your application in the Android emulator. You can launch the emulator as a standalone application from a command line, or you can run it from within your Android Studio development environment. In either case, you specify the AVD configuration to load and any startup options you want to use, as described earlier in this document.

You can run your application on a single instance of the emulator or, depending on your needs, you can start multiple emulator instances and run your application in more than one emulated device. You can use the emulator's built-in commands to simulate GSM phone calling or SMS between emulator instances, and you can set up network redirection that allows emulators to send data to one another. For more information, see Telephony Emulation, SMS Emulation, and Emulator Networking.

To start an instance of the emulator from the command line, navigate to the tools/ folder of the SDK. Enter the emulator command like this:

emulator -avd <avd_name> [<options>]

This initializes the emulator, loads an AVD configuration, and displays the emulator window. For more information about command line options for the emulator, see the Android Emulator tool link in the Resources section. When you run your app from Android Studio, it installs and launches the app on your connected device or emulator (launching the emulator, if necessary). You can specify emulator startup options in the Run/Debug dialog on the Target tab.

Conclusion

The Android emulator is an application that provides a virtual mobile device on which you can run your Android applications. It runs a full Android system stack, down to the kernel level, that includes a set of preinstalled applications (such as the dialer) that you can access from your applications. You can choose what version of the Android OS you want to run in the emulator by configuring AVDs, and you can also customize the mobile device skin and key mappings. When launching the emulator and at runtime, you can use a variety of commands and options to control its behavior.

The Android system images available through the Android SDK Manager contain code for the Android Linux* kernel, the native libraries, the Dalvik* VM, and the various Android packages (such as the Android framework and preinstalled applications). The emulator provides dynamic binary translation of device machine code to the OS and processor architecture of your development machine. The Android emulator supports many hardware features commonly found on mobile devices.

Resources

  • Read about 64-bit Android* and Android Run Time here.
  • Read How to Develop and Evaluate 64-bit Android* Apps on Intel® x86 Platforms here.
  • Find more information about 64-bit Android* OS here.
  • Read about the Android L emulator here.
  • Read about managing virtual devices here.

About the Author

Egor Filimonov works in the Software & Services Group at Intel Corporation. He is a student of Lobachevsky State University in Nizhni Novgorod, Russia and majors in mechanics and mathematics. His specialty is applied mathematics and informatics. His main interest is HPC (High Performance Computing) and mobile technologies.

Intel® System Studio Developer Story: With the Intel® JTAG Debugger and MinnowBoard MAX, how to debug exception errors in the Android Linux kernel


 

Intel® System Studio Developer Story: With XDB and MinnowBoard MAX, how to debug exception errors in the Android Linux kernel

  In this article, we will see how to debug and check exception errors in the Android Linux kernel on an Intel® Architecture-based system with the Intel® JTAG Debugger, which is part of Intel® System Studio Ultimate Edition. Along the way, we will look at what JTAG and the Intel® JTAG Debugger are, along with some background on exception handling in Intel® Architecture-based systems. We will use the MinnowBoard MAX as the Intel® Architecture-based target system.

  1. JTAG overview

  JTAG stands for Joint Test Action Group and is pronounced "jay-tag"; it normally refers to IEEE Std 1149.1-1990, IEEE Standard Test Access Port and Boundary-Scan Architecture. This standard is used to debug and test SoCs (Systems on Chip) and microprocessor software.

  A JTAG debugging configuration consists of three parts: JTAG debugger software on a host machine, a JTAG probe, and on-chip debug (OCD) logic in the SoC. 

  1.1 JTAG Debugger Software

  The JTAG debugger is a software tool on a host machine. It receives addresses and data from the JTAG probe and presents them to the user, and the user can send data and addresses to the JTAG probe via USB or another PC connection, and vice versa. With this tool, the user can do run control and source-line debugging with the loaded symbols of the image (the binary downloaded to the target system), including run, stop, step into, step over, and setting breakpoints, and can access memory as well. So the user can easily debug the target system's software and inspect system memory and registers. Intel® System Studio Ultimate Edition includes the Intel® JTAG Debugger (a.k.a. XDB) as the host-side JTAG debugger software.

  1.2 JTAG Probe (or JTAG Adapter)

 The JTAG probe is the hardware box that converts JTAG signals to PC connectivity signals such as USB, parallel, RS-232, or Ethernet. USB is the most popular, and many JTAG probes use USB as the connection to the host PC. Although JTAG specifies a minimal standard set of pins, the target-side interface has many variations, e.g. ARM 10-pin, ST 14-pin, OCDS 16-pin, ARM 20-pin. The Intel® JTAG Debugger and MinnowBoard MAX configuration used in this article has a 60-pin connection to the target. The Intel® ITP-XDP3 probe is used as the JTAG probe for the MinnowBoard MAX. The Intel® JTAG Debugger is also compatible with JTAG probes from other vendors, such as the Macraigor® Systems usb2Demon® and OpenOCD.

  1.3 On Chip Debug (Target SoC)

  The main components of OCD are the TAP (Test Access Port) and the TDI (Test Data In) / TDO (Test Data Out) lines. Using the TAP, we can reset the device, read/write registers, and bypass devices; the core technology of JTAG is the boundary scan performed over the TDI/TDO signal lines (click for more details and a picture).

<Figure 1-1> Configuration of the JTAG probe and target system. The lure is the small pin adapter for the Intel® ITP-XDP3 and MinnowBoard MAX.

  2. Overview of  Exception in Intel Architecture

  An exception is a synchronous event that is generated when the processor detects one or more predefined conditions while executing an instruction. The IA-32 architecture specifies three classes of exceptions: faults, traps, and aborts. Faults and traps are normally recoverable, while an abort does not allow a restart of the program. When an exception occurs, it is processed the same way as an interrupt: the current process is halted and saved, the system switches to the exception handler, and execution resumes once exception handling is done. 

 <Table 2-1> Protected-Mode Exceptions and Interrupts 

 

 3. Prepare the MinnowBoard MAX and Intel® ITP-XDP3 with a host PC connection via USB

 You need to set up the MinnowBoard MAX with Android OS. For this, please see the article "Intel(R) System Studio Developer Story : How to configure, build and profile the Linux Kernel of Android by using the VTune" (please click). It introduces the MinnowBoard MAX and shows how to set up, build, and download Android OS onto the MinnowBoard MAX. 

 Connect the MinnowBoard MAX, with the lure (the small PCB with the 60-pin JTAG connector), to the Intel® ITP-XDP3 JTAG probe, and connect the Intel® ITP-XDP3 to a host PC via USB. The host PC must have Intel® System Studio Ultimate Edition installed, which provides the USB driver for the Intel® ITP-XDP3. 

<Figure 3-1> Connections of the MinnowBoard MAX, Intel® ITP-XDP3 JTAG probe, and Intel® JTAG Debugger (XDB) on the host PC.

 4. Using the Intel® JTAG Debugger (XDB) for exceptions of the Android kernel on MinnowBoard MAX

  Below is the step-by-step procedure for using the Intel® JTAG Debugger to check and debug an exception in the kernel.

(1) Run the Intel® JTAG Debugger: go to the installation directory and run the batch file (e.g. start_xdb_legacy_products.bat).

(2) Connect to the target: go to the Intel® JTAG Debugger menu - File - Connect and select Intel® ITP-XDP3 and Z3680, Z37xx.

     

(3) Load the symbol files and set the directory of source files: go to the Intel® JTAG Debugger menu - File - Load / Unload Symbol and set the symbol files. For source files, go to the Intel® JTAG Debugger menu - Options - Source Directories and set the rule and directories. The rule adjusts file paths between the current source location and the path recorded in the symbol file at compile time.

(4) Browse to the entry file that has the exception handlers: go to the Intel® JTAG Debugger menu - View - Source files and open the entry_64.S file.

(5) Set a breakpoint at the exception entry point: find ENTRY(error_entry), the entry point for exceptions that carry an error code in the rax register. Each exception handler is defined via the zeroentry or errorentry macros, so you can set a breakpoint in error_entry or in a specific handler. In this article, we use "zeroentry invalid_op do_invalid_op" for testing.

ENTRY(error_entry)
	XCPT_FRAME
	CFI_ADJUST_CFA_OFFSET 15*8
	/* oldrax contains error code */
	cld
	movq_cfi rdi, RDI+8
	movq_cfi rsi, RSI+8
	movq_cfi rdx, RDX+8
	movq_cfi rcx, RCX+8
	movq_cfi rax, RAX+8
	movq_cfi  r8,  R8+8
	movq_cfi  r9,  R9+8
	movq_cfi r10, R10+8
	movq_cfi r11, R11+8
	movq_cfi rbx, RBX+8
	movq_cfi rbp, RBP+8
	movq_cfi r12, R12+8
	movq_cfi r13, R13+8
	movq_cfi r14, R14+8
	movq_cfi r15, R15+8
	xorl %ebx,%ebx
	testl $3,CS+8(%rsp)
	je error_kernelspace
error_swapgs:
	SWAPGS
error_sti:
	TRACE_IRQS_OFF
	ret<....>
zeroentry divide_error do_divide_error
zeroentry overflow do_overflow
zeroentry bounds do_bounds
zeroentry invalid_op do_invalid_op
zeroentry device_not_available do_device_not_available
paranoiderrorentry double_fault do_double_fault
zeroentry coprocessor_segment_overrun do_coprocessor_segment_overrun
errorentry invalid_TSS do_invalid_TSS
errorentry segment_not_present do_segment_not_present
zeroentry spurious_interrupt_bug do_spurious_interrupt_bug
zeroentry coprocessor_error do_coprocessor_error
errorentry alignment_check do_alignment_check
zeroentry simd_coprocessor_error do_simd_coprocessor_error

(6) Example: trigger an exception and check that the handler hits the breakpoint we set. Set a breakpoint at "zeroentry invalid_op do_invalid_op" and call BUG(), which raises the "Invalid Opcode" fault via the ud2 instruction.

#define BUG()							\
do {								\
	asm volatile("ud2");					\
	unreachable();						\
} while (0)

< Call the BUG() >

Add the BUG() macro to your kernel test code to trigger an exception. (In this example, it was added in keyboard.c to trigger an exception on a special key input sequence.)

< Stop at the Invalid_op of break point >

Set the breakpoint at the invalid-opcode exception handler or at the entry of the exception handler. Then you can see and debug where the exception came from.

5. Conclusion 

 Some exceptions indicate critical errors in system hardware or software, so it is important to know which exceptions occur, and why and where they occur. With the Intel® JTAG Debugger you can easily check this and investigate such issues further, because it provides powerful features such as easy access to assembly and source code and inspection of the call stack and registers.

6. References 

Intel® 64 and IA-32 Architectures Software Developer’s Manual

JTAG 101: IEEE 1149.x and Software Debug

 

Intel® VTune™ Amplifier Tutorials


The following tutorials are quick paths to start using the Intel® VTune™ Amplifier. Each demonstrates an end-to-end workflow you can ultimately apply to your own applications.

NOTE:

  • These tutorials apply to the VTune Amplifier XE starting from version 2013 and higher and to the VTune Amplifier for Systems from version 2014 and higher.

  • Apart from the analysis and target configuration details, most of the VTune Amplifier XE tutorials are also applicable to the VTune Amplifier for Systems. The Finding Hotspots on the Intel Xeon Phi coprocessor tutorial is applicable only to the VTune Amplifier XE.

VTune Amplifier XE Tutorials

Take This Short Tutorial | Learn To Do This

Finding Hotspots
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Fortran Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: nqueens_fortran

Identify where your application is spending time, detect the most time-consuming program units and how they were called.

Finding Hotspots on the Intel® Xeon Phi™ Coprocessor
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: matrix_vtune_amp_xe

Identify where your native Intel Xeon Phi coprocessor-based application is spending time, estimate code efficiency by analyzing hardware event-based metrics.

Analyzing Locks and Waits
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Identify locks and waits preventing parallelization.

Identifying Hardware Issues
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: matrix_vtune_amp_xe

Identify the hardware-related issues in your application such as data sharing, cache misses, branch misprediction, and others.

VTune Amplifier for Systems Tutorials

Take This Short Tutorial | Learn To Do This

Finding Hotspots on a Remote Linux* System
Duration: 10-15 minutes

C++ Tutorial
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Configure and run a remote Advanced Hotspots analysis on a Linux target system.

Finding Hotspots on an Android* Platform
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: Tachyon.apk

Configure and run a remote Basic Hotspots analysis on an Android target system.

Intel® CCF 3.0.13 Release


March 20th, 2015

Intel® CCF 3.0.13

This release of Intel® Common Connectivity Framework SDK version 3.0 provides support for developing applications on Intel® CCF Version 3.0, and it includes the framework, tools, documentation and sample applications for creating applications on Windows*, Android*, and iOS*.

This release also contains several bug fixes, security improvements, and overall stability improvements over the previous release of Intel® CCF.

NOTE: After 4/6/2015 the current cloud server will be replaced. CCF v3.0.13 will use the new server, and the old server will no longer be available. CCF 3.0 applications that use the cloud must update prior to 4/6/2015 to continue using it. During the transition period the new servers will interoperate with the existing servers. Applications that do not use cloud features will be unaffected.

Limitations

The limitations of this release include:

  • Applications developed using the Intel® CCF 3.0 PV1, PV2, and PV3 releases will not be able to discover, over the cloud, users of applications built with CCF 3.0.13.
  • Wi-Fi Direct support is disabled.

Support

Developer support for CCF is available through the Intel® CCF Developer Forum.
