Man vs. AI - Who will win the Aleph Zero Race?

To celebrate the launch of Imagry’s robust real-time driving policy planner, Aleph Zero, we will be hosting the first ever man vs. AI car race, where you will be able to race against Aleph Zero.

 

The race will take place at CES in Las Vegas, Jan 9th–11th, at Imagry’s booth: Tech West, Sands Expo (Sands), Stand #51702, Hall G. Public access to the race will be from 16:00 to 18:00.

 

The “best human driver” in the Aleph Zero Race will be announced on Imagry’s LinkedIn page at the end of the race. The winner will be awarded the renowned GeForce GTX 1060, provided by NVIDIA.

 

Got what it takes? Join the Aleph Zero Race!


Meet us at CES 2018


You are invited to meet us at CES and see our autonomous car driving using cameras only, at a fraction of the cost of traditional Lidar, Radar, and HD GPS-based solutions.

Schedule time with our team to learn about our vision for the next level of autonomous driving by emailing nitsa@imagry.co.

Have only a few minutes? Jump to our booth to say hi and catch a glimpse of our demos (Sands Expo, Stand #51702, Hall G, Eureka Park marketplace).

 

See you in Vegas!


Deep Q-learning on Torcs with Resnet-18

There has been a lot of interest in deep reinforcement learning lately, and there are many examples of it on the net. Most deep RL algorithms use a convolutional network. Nevertheless, there is a prevalent opinion that only shallow convolutional networks (2–3 convolution layers) are easy to train for reinforcement learning. Whether or not that is true, there is a workaround: take a pretrained deep network and freeze most of it, leaving only the upper part to train. The frozen part works as a feature detector with no trainable parameters.
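As an illustration of this idea (a minimal PyTorch-style sketch; our actual experiment below was done in Caffe, and the head sizes here are just examples), freezing most of a pretrained ResNet-18 could look like this:

```python
import torch.nn as nn
import torchvision.models as models

# Load a pretrained ResNet-18; the frozen part acts as a fixed feature
# detector with no trainable parameters.
net = models.resnet18(pretrained=True)
for param in net.parameters():
    param.requires_grad = False            # freeze the whole backbone

# Un-freeze only the last residual block so the upper part can adapt.
for param in net.layer4.parameters():
    param.requires_grad = True

# Replace the classifier with a small trainable head for the new task
# (11 discrete actions here is just an example).
net.fc = nn.Sequential(
    nn.Linear(net.fc.in_features, 256),
    nn.ReLU(),
    nn.Linear(256, 11),
)
```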

We ran an experiment on Torcs using a modified gym-torcs framework.
The input state was purely image-based (otherwise it obviously wouldn’t make sense to use ResNet).
Some modifications were made so that gym-torcs produces images at a higher resolution, more suitable for the ResNet input. The original ResNet has an RGB input; we used the three channels of the former RGB input to stack three consecutive greyscale images. Following common practice for image-based neural-network drivers, we used differences between consecutive frames produced by Torcs instead of the original images.
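A rough sketch of this preprocessing (the function name and exact frame handling are illustrative, not our exact code):

```python
import numpy as np

def preprocess(frames):
    """Build the 3-channel network input from four consecutive Torcs frames:
    each channel holds the difference of two consecutive greyscale frames."""
    greys = [frame.mean(axis=2) for frame in frames]      # RGB -> greyscale
    diffs = [greys[i + 1] - greys[i] for i in range(3)]   # 3 consecutive differences
    return np.stack(diffs, axis=0).astype(np.float32)     # shape (3, H, W)
```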
We added three fully-connected layers on top of ResNet.
Most of ResNet-18 was frozen; only the last block and the fully-connected layers on top of it were trained.
After that we trained the ResNet-18 to drive the car with a discrete Q-learning algorithm (a simple explanation of deep Q-learning can be found here).
ResNet-18 produces the steering angle; brake/acceleration is produced in gym_torcs by a simple controller that keeps the speed approximately constant.
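A hedged sketch of what such a speed controller could look like (the target speed and gain below are illustrative, not the exact values used in gym_torcs):

```python
def speed_controller(speed, target_speed=80.0, gain=0.1):
    """Keep the speed roughly constant: accelerate below the target speed,
    brake above it. Returns (accel, brake), each clipped to [0, 1]."""
    error = target_speed - speed
    if error > 0:
        return min(gain * error, 1.0), 0.0     # too slow -> accelerate
    return 0.0, min(-gain * error, 1.0)        # too fast -> brake
```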
The original ResNet-18 uses Batch Normalization layers, but a frozen net doesn’t need Batch Normalization. Because our ResNet-18 is mostly frozen, the Batch Normalization layers were merged into the convolutional layers.
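Folding a frozen BatchNorm layer into the preceding convolution is standard arithmetic; a small NumPy sketch of the transformation (names are illustrative):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Merge a frozen BatchNorm (scale gamma, shift beta, running mean/var)
    into the preceding convolution with weights W (out_ch, in_ch, kH, kW)
    and bias b (out_ch,)."""
    scale = gamma / np.sqrt(var + eps)             # per-output-channel scale
    W_folded = W * scale[:, None, None, None]      # rescale the conv filters
    b_folded = (b - mean) * scale + beta           # rescale and shift the bias
    return W_folded, b_folded
```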
Training was done in the Caffe framework using stochastic gradient descent.

Discretization – more or less?
We compared a coarse and a fine discretization of the steering output: 3 vs. 11 discrete actions.

Target network

As per common practice, we use a replay buffer and two networks: the current network, which drives the car, and a target network, which changes slowly over time. The parameters of the target network are updated as a running average of the current network:
$$\theta^- \leftarrow (1-\tau)\, \theta^- + \tau\, \theta \\
\theta \ \ \text{- parameters of the current training network,}
\\ \theta^- \ \ \text{- parameters of the target (time-averaged) network}$$
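In code this soft update is just a running average over the weights; a minimal PyTorch-style sketch (the value of tau is illustrative):

```python
def soft_update(target_net, current_net, tau=1e-3):
    """theta_minus <- (1 - tau) * theta_minus + tau * theta."""
    for p_target, p_current in zip(target_net.parameters(),
                                   current_net.parameters()):
        p_target.data.mul_(1.0 - tau).add_(tau * p_current.data)
```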

Torcs problems
Lag spikes (packet loss?)
Inconsistent road position signal in (-1, 1)
Crossing the white line (few such examples, so little to train on); hard shadows cause this problem
Rebound

Looking into the future: the importance of gamma
Our total discounted reward is
$$R = \sum_i r_i \ \gamma^{i-1} \\ \gamma \ \text{is discount parameter}$$
Let’s look at the limiting case
$$\gamma = 0, \ R = r_1$$
In that case the total reward is just the reward for the current action, and for discrete actions reinforcement learning becomes regression. Obviously, simple regression is much easier than reinforcement learning.
That indicates that a smaller gamma is easier to train with.
The gamma parameter can be very important for training. A gamma close to 1 can produce erratic behavior in the middle of training, because some erratic trajectory may produce a better reward than a simpler trajectory. If this erratic trajectory is stable, it can persist for a long time. That kind of behavior encourages choosing the “minimal” gamma – the smallest gamma that still provides a stable predictive policy.
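To make the role of gamma concrete, here is a tiny sketch of the discounted return; with gamma = 0 it collapses to the immediate reward, as noted above (the reward numbers are made up for illustration):

```python
def discounted_return(rewards, gamma):
    """R = sum_i r_i * gamma**(i-1) for one episode."""
    return sum(r * gamma ** i for i, r in enumerate(rewards))

rewards = [1.0, 1.0, -5.0]
print(discounted_return(rewards, 0.0))    # 1.0  -> only the immediate reward counts
print(discounted_return(rewards, 0.99))   # about -2.91 -> the future penalty dominates
```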

Positive reward

We are training in an environment with a “termination” condition: when a terminating state is reached, the training episode stops. In such an environment it is important that the reward function be non-negative over the whole reachable state space. If the reward is negative on some undesirable part of the state space, the network can learn to terminate the episode instead of trying to exit that region and accumulating more negative reward. Termination means zero reward for an infinite number of steps, which the network will prefer over a many-step negative reward – the network would learn to commit suicide instead of continuing to suffer.

N-step Q learning

It turns out that simple Q-learning is very slow to train. To accelerate training we used n-step Q-learning.
In our form of n-step Q-learning we used the standard Q-learning loss, the squared distance between the current Q value and the target estimate:
$$loss = (Q(s_t, a_t, \theta) - Q_y)^2$$
Standard n-step learning uses the target
$$Q_{y}^{nstep} = r_t + \gamma\, r_{t+1} + \gamma^2 r_{t+2} + \gamma^3 r_{t+3} + \dots + \gamma^n V_{s_{t+n}, \theta^-}$$
or
$$Q_{y,t-1}^{nstep} = r_{t-1} + \gamma \ Q_{y,t}^{nstep}$$
However, to better account for exploration noise, we can use
$$Q_{y,t-1}^{expl} = r_{t-1} + \gamma \ max(V_{s_t, \theta^-}, \ Q_{y,t}^{expl})$$
by adding a max with the corresponding V recursively before each new reward, or, unrolling it:
$$Q_y^{expl} = r_t + \gamma \ max(V_{s_{t+1}, \theta^-},\ r_{t+1} + \gamma \ max(V_{s_{t+2}, \theta^-}, \ r_{t+2} + \gamma \ max(V_{s_{t+3}, \theta^-},\ r_{t+3} + \dots + \gamma\, V_{s_{t+n}, \theta^-})\dots))$$
where V is the value function
$$V_{s, \theta^-} = max_a Q(s, a, \theta^-)$$
We can also use a weighted average of both:
$$Q_y = \alpha \ Q^{nstep}_y + (1-\alpha) \ Q^{expl}_y$$
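A sketch of how both targets above, and their blend, can be computed from one sampled n-step block (function and argument names are illustrative; the values V come from the target network as defined above):

```python
def nstep_targets(rewards, next_values, gamma, alpha=0.5):
    """Compute the Q target for the first transition of an n-step block.

    rewards     : [r_t, ..., r_{t+n-1}]
    next_values : [V(s_{t+1}), ..., V(s_{t+n})], V taken from the target net
    Returns alpha * Q^nstep + (1 - alpha) * Q^expl.
    """
    # Standard n-step target: bootstrap only once, at the end of the block.
    q_nstep = next_values[-1]
    for r in reversed(rewards):
        q_nstep = r + gamma * q_nstep

    # Exploration-aware target: take max with V(s) before adding each reward.
    q_expl = next_values[-1]
    for r, v in zip(reversed(rewards), reversed(next_values)):
        q_expl = r + gamma * max(v, q_expl)

    return alpha * q_nstep + (1.0 - alpha) * q_expl
```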

Rebound

Simplified priority sampling

When choosing the starting n-step block, we make k tries and choose the block with the maximal variance in reward.
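A hedged sketch of that sampling rule over a simple list-based replay buffer (the buffer layout and the value of k are illustrative):

```python
import random
import numpy as np

def sample_nstep_block(buffer, n, k=4):
    """Pick the start of an n-step block: try k random candidates and keep
    the one whose rewards have the largest variance."""
    best_start, best_var = 0, -1.0
    for _ in range(k):
        start = random.randrange(len(buffer) - n)
        rewards = [buffer[start + i]["reward"] for i in range(n)]
        var = float(np.var(rewards))
        if var > best_var:
            best_start, best_var = start, var
    return buffer[best_start:best_start + n]
```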

Training
We started with only the steering angle as the action and later retrained with joint steering/acceleration actions.
We trained the net for 600k batch iterations (batch size 32) on 17 tracks. Each track was cut off after 10 minutes of running. Even with 600k iterations, rebound events still happen on the more difficult tracks (sometimes shadows cause the problem).
For steering/acceleration we used two Q-functions.

Source code is here

Acknowledgements:
Caffe is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.
Naoto Yoshida is the author of gym-torcs.
The implementation of the replay buffer is based on Ben Lau’s work.


Come and meet Imagry at the Embedded Vision Summit in Santa Clara, California, May 1-2, 2017

Imagry is proud to have an impressive presence at the Embedded Vision Summit 2017*:

First, Adham Ghazali, Imagry’s CEO and co-founder, will give a talk titled “Edge Intelligence: Visual Reinforcement Learning for Mobile Devices”. In this talk we will tackle a major challenge of real-life visual data: it encompasses a tremendous amount of information, which makes the design and development of a perceptual engine very hard.

Additionally, Imagry is proud to have been selected as a finalist in the 2017 Embedded Vision Summit Vision Tank. The Vision Tank is the Embedded Vision Summit’s exciting start-up competition, in which five computer vision start-up companies present their innovations to a panel of investors and industry experts!

* The Embedded Vision Summit is the only event focused exclusively on the hottest topic in the electronics industry today: deployable computer vision. More than 1,000 engineers, executives, marketers, investors, and analysts will gather in Santa Clara, California to explore deployable computer vision technology and business opportunities in depth.


Looking for an embedded vision solution? Meet Imagry at CES 2017

We are very excited to exhibit for the first time at the legendary CES show.

At the show we will demonstrate an innovative solution for embedded vision that makes any kind of device more responsive, intelligent, autonomous, and easier to use.

Seeing is believing! Schedule a meeting with us (nitsa@imagry.co) or jump to our booth (Tech West, Sands, Hall G – 51038, Eureka Park marketplace).

See you in Las Vegas!


The Autonomy Autostrada

What are you waiting for? Use our revolutionary technology. Put your machine on the autostrada and turn it into a smart device.

Autonomous vehicles are among the most exciting robots that we are going to use in the near future. We’re all looking forward to the huge transformation that autonomous vehicles will bring to our economy, our mobility, and our society.

Cloud vision solutions are relevant for many types of applications, but they are inherently inadequate for portable devices because large amounts of data have to be synchronized and analyzed in real time. To turn the vision of autonomous devices into reality, devices should be able to understand visual input in real time and locally, without the need for internet connectivity.

Imagry is building large-scale image and video understanding engines that work in real time and locally on any kind of device, with low battery consumption. Recently we introduced a groundbreaking technology that will accelerate the execution of this concept – BIT Technology. This technology speeds up the runtime of conventional deep neural networks by 16 times, giving any device the “brain” to understand visual information in real time.

By providing businesses with easy-to-use web tools for creating embedded engines, the autostrada to autonomous devices is up and running.

In the not-so-distant future, many businesses will join this exciting ride and build all kinds of applications, turning the 50 billion connected devices projected by 2020 into autonomous devices.