

Product Details

Intel Neural Compute Stick 2

Image of Intel Neural Compute Stick 2

Prototype and deploy deep neural network (DNN) applications smarter and more efficiently with a tiny, fanless deep learning development kit designed to enable a new generation of intelligent devices.




Product Availability: Against PO, within 3-4 weeks
Features
  • Processor: Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU)
  • Supported frameworks: TensorFlow* and Caffe*
  • Connectivity: USB 3.0 Type-A
  • Dimensions: 2.85 in. x 1.06 in. x 0.55 in. (72.5 mm x 27 mm x 14 mm)
  • Operating temperature: 0°C to 40°C
  • Compatible operating systems: Ubuntu* 16.04.3 LTS (64 bit), CentOS* 7.4 (64 bit), and Windows® 10 (64 bit)

Quick Download

OpenVINO-Product-Brief-Nov-2018

Specifications

Technical Specification:

Processor: Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU)
Supported frameworks: TensorFlow* and Caffe*
Connectivity: USB 3.0 Type-A
Dimensions: 2.85 in. x 1.06 in. x 0.55 in. (72.5 mm x 27 mm x 14 mm)
Operating temperature: 0°C to 40°C
Compatible operating systems: Ubuntu* 16.04.3 LTS (64 bit), CentOS* 7.4 (64 bit), and Windows® 10 (64 bit)
Intel® Distribution of OpenVINO™ toolkit - Develop Multiplatform Computer Vision Solutions:

Develop applications and solutions that emulate human vision with the Intel® Distribution of OpenVINO™ toolkit. Based on convolutional neural networks (CNN), the toolkit extends workloads across Intel® hardware (including accelerators) and maximizes performance.

  • Enables CNN-based deep learning inference at the edge
  • Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API (a short sketch follows this list)
  • Speeds up time to market via a library of functions and preoptimized kernels
  • Includes optimized calls for OpenCV and OpenVX*
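
As a rough illustration of the common API mentioned in the list above, the sketch below uses the 2018-era openvino.inference_engine Python bindings (IENetwork and IEPlugin) to load an Intermediate Representation and run inference; switching the device string between "CPU" and "MYRIAD" (the Neural Compute Stick 2) is the only change needed. The model file names, input shape handling, and dummy input are hypothetical placeholders, not part of this product page.

    # Hedged sketch: 2018-era OpenVINO Inference Engine Python API.
    # Model paths and preprocessing are illustrative placeholders.
    import numpy as np
    from openvino.inference_engine import IENetwork, IEPlugin

    MODEL_XML = "model.xml"  # Intermediate Representation from the Model Optimizer
    MODEL_BIN = "model.bin"

    def run_inference(device="MYRIAD"):
        # "MYRIAD" targets the Neural Compute Stick 2; "CPU", "GPU", etc. use the same API.
        plugin = IEPlugin(device=device)
        net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)

        input_blob = next(iter(net.inputs))
        output_blob = next(iter(net.outputs))
        n, c, h, w = net.inputs[input_blob].shape

        exec_net = plugin.load(network=net)

        # Dummy NCHW input; a real application would preprocess an image here.
        image = np.random.rand(n, c, h, w).astype(np.float32)
        result = exec_net.infer(inputs={input_blob: image})
        return result[output_blob]

    if __name__ == "__main__":
        print(run_inference("MYRIAD").shape)

The same call pattern applies whether the stick is plugged into a desktop host or, per the 2018 R5 notes below, a Raspberry Pi* used as a host (preview).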
What's New in the 2018 R5 Release:
  • Extends neural network support to include long short-term memory (LSTM) networks from the ONNX*, TensorFlow*, and MXNet* frameworks, plus 3D convolution-based networks in preview mode (CPU only), to support additional new use cases beyond computer vision.
  • Introduces the Neural Network Builder API in a preview mode, which provides the flexibility to create a graph from simple API calls. Directly deploy it using the Inference Engine without loading intermediate representation (IR) files.
  • Delivers a significant boost in CPU performance, especially on multicore systems, through new parallelization techniques.
  • Provides INT8-based primitives for Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Intel® Advanced Vector Extensions 2 (Intel® AVX2), and Intel® Streaming SIMD Extensions 4.2 (SSE4.2) platforms, delivering optimized performance on Intel® Xeon®, Intel® Core™, and Intel Atom® processors.
  • Supports Raspberry Pi* hardware as a host for the Intel® Neural Compute Stick 2 (preview). Offload your deep learning workloads seamlessly to this low-cost, low-power USB stick based on Intel® Movidius™ Myriad™ X technology. The previous-generation Intel® Movidius™ Neural Compute Stick is also supported.
  • Adds three optimized pretrained models (a total of 30 in the toolkit):
      • Text detection of indoor and outdoor scenes
      • Two single-image super-resolution networks to enhance the resolution of an input image by a factor of three or four

Using Intel® Distribution for Python:

You can:

  • Achieve faster Python* application performance—right out of the box—with minimal or no changes to your code
  • Accelerate NumPy*, SciPy*, and scikit-learn* with integrated Intel® Performance Libraries such as Intel® Math Kernel Library and Intel® Data Analytics Acceleration Library
  • Access the latest vectorization and multithreading instructions, Numba* and Cython*, composable parallelism with Intel® Threading Building Blocks, and more (a short sketch follows this list)
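
As a rough, hypothetical illustration of the points above: the snippet below is unmodified NumPy*, scikit-learn*, and Numba* code. Run under the Intel® Distribution for Python, the linear algebra dispatches to Intel® Math Kernel Library and the Numba loop compiles to a multithreaded, vectorized kernel, with no source changes; the array sizes and functions are arbitrary examples, not benchmarks from this page.

    # Unmodified NumPy / scikit-learn / Numba code; the Intel Distribution for
    # Python accelerates it transparently (Intel MKL-backed BLAS, DAAL-backed
    # scikit-learn primitives, multithreaded Numba kernels).
    import numpy as np
    from sklearn.cluster import KMeans
    from numba import njit, prange

    X = np.random.rand(20_000, 50)

    # Dense linear algebra runs on the optimized BLAS underneath.
    gram = X.T @ X

    # scikit-learn benefits from accelerated primitives without code changes.
    labels = KMeans(n_clusters=8, n_init=3).fit_predict(X)

    @njit(parallel=True)
    def row_norms(a):
        # Numba generates a multithreaded, vectorized loop for this function.
        out = np.empty(a.shape[0])
        for i in prange(a.shape[0]):
            out[i] = np.sqrt((a[i] * a[i]).sum())
        return out

    print(gram.shape, labels.shape, row_norms(X)[:3])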

Intel® Distribution for Python* is included in our flagship product, Intel® Parallel Studio XE. This powerful, robust suite of software development tools has everything you need to write Python* native extensions: C and Fortran compilers, numerical libraries, and profilers. Help boost application performance by taking advantage of the ever-increasing processor core counts and vector register widths available in Intel® processors and other compatible processors. The 30-day trial includes online customer support.

What's New in Intel® Distribution for Python:

This release offers many performance improvements, including:

  • Faster machine learning, with key scikit-learn* algorithms accelerated by the Intel® Data Analytics Acceleration Library
  • The latest TensorFlow* and Caffe* libraries, optimized for Intel® architecture
  • The XGBoost package, included in the Intel® Distribution for Python (Linux* only); a short usage sketch follows this list
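
Since the XGBoost package is called out above, here is a minimal usage sketch; the toy data and parameters are invented for illustration, and the snippet relies only on the standard xgboost Python API rather than anything specific to the Intel® build (which, per the note above, ships on Linux* only).

    # Minimal XGBoost usage; the package is bundled with the Intel Distribution
    # for Python on Linux, but the API shown is the standard xgboost interface.
    import numpy as np
    import xgboost as xgb

    # Toy binary-classification data (illustrative only).
    X = np.random.rand(1000, 20)
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
    booster = xgb.train(params, dtrain, num_boost_round=50)

    print(booster.predict(xgb.DMatrix(X[:5])))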


What's New in Intel® Parallel Studio XE 2019:
  • Boost application efficiency and performance for Intel Core and Xeon processors with new and enhanced capabilities in compilers, performance libraries, and analysis tools. Vectorize and thread your code (using OpenMP*) to take full advantage of the latest SIMD-enabled hardware, including Intel AVX-512. Accelerate diverse workloads across enterprise to cloud, and HPC to AI.
  • Improve performance through greater scalability and reduced latency with next-generation Intel MPI Library.
  • Interactively build, validate, and visualize parallel algorithms with Intel® Advisor's Flow Graph Analyzer.
  • Faster machine learning with Python and scikit-learn* in Intel Distribution for Python.
  • Enhanced roofline analysis capabilities and simplified application profiling workflow with a new intuitive user interface in Intel VTune Amplifier.
  • Stay up to date with industry standards and IDEs, including full C++14 and expanded C++17 support; full Fortran 2008 and substantial Fortran 2018 support; full OpenMP* 4.5 and initial OpenMP 5.0 draft support; Python 2.7 and 3.6; and Microsoft Visual Studio* 2017 integration.