Posts – Page 2

Ceci n'est pas un Matt - Machine Learning and Generative Adversarial Networks - Part II

This post was originally written on my Coil site, which is currently my main blogging platform. On there you will also see bonus content if you are a Coil subscriber.
https://coil.com/p/hammertoe/Ceci-n-est-pas-un-Matt-Machine-Learning-and-Generative-Adversarial-Networks-Part-II/0irA1Ppib

So following on from my last post about trying to generate cartoon ducks using AI -- and accidentally producing something quite Warhol-ish -- I decided to try and generate a new profile pic for myself on our intranet at work. A colleague of mine said that my current pic makes me look like Mr Noodle from Sesame Street. No, you can't see the pic. But maybe my beard was a bit too unruly at the time, and maybe the wall behind me was a bit too bright and primary-colourish.

This is a post in my series on machine learning and artificial intelligence. You can find more posts on this topic at the main index.

So could I use the same technique of Generative Adversarial Networks (GANs) to produce a new image of "me"?

Let's recap how these networks work with a little analogy:

"So Sir, can you describe to us the person who robbed you of your wallet?"

"Yes officer, he was male, early forties, caucasian, 5'9", short brown hair, glasses, a beard and moustache"

[sketch artist works furiously]

"Like this?"

"No, the glasses were thinner, wire-framed type"

[sketch artist draws a new drawing with different glasses]

"Like this?"

"Yeah... maybe smaller nose"

[sketch artist draws a new drawing with a smaller nose]

We have two neural networks: the generator (the sketch artist) and the discriminator (me, the witness). The first creates new images and the second critiques them. If the critic can't tell the difference between a 'fake' and a 'real' image, then the generator (the sketch artist) has learned how to produce good likenesses of the subject.

So first, I needed a whole load of real images to feed to the discriminator in amongst the generator's 'fake' ones, to see if it could tell the difference.
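For the more technically inclined, the tug-of-war between the two networks looks roughly like this in code. This is just a minimal sketch, written here in PyTorch purely for illustration: the framework, layer sizes, and learning rates are my own assumptions, and the simple DCGAN I actually used differs in the details.

    import torch
    import torch.nn as nn

    LATENT_DIM = 100  # length of the random noise vector the generator starts from

    # The generator (the "sketch artist"): turns a noise vector into a 64x64 RGB image.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 128 * 8 * 8), nn.ReLU(),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8x8   -> 16x16
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),    # 32x32 -> 64x64
    )

    # The discriminator (the critic): scores an image as real (1) or fake (0).
    discriminator = nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),     # 64x64 -> 32x32
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 32x32 -> 16x16
        nn.Flatten(),
        nn.Linear(128 * 16 * 16, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_images):
        """One adversarial round. real_images: (batch, 3, 64, 64), scaled to [-1, 1]."""
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1. Train the critic on real photos mixed in with the generator's fakes.
        fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
        d_loss = (loss_fn(discriminator(real_images), real_labels)
                  + loss_fn(discriminator(fakes), fake_labels))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2. Train the generator to produce fakes the critic believes are real.
        g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

Each call to train_step is one round of the sketch-artist analogy: the critic sees a batch of real photos and a batch of fakes and learns to tell them apart, then the generator gets nudged towards fakes the critic can no longer reject.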

Luckily, Apple iPhones already have some machine learning in them to identify and categorise people, so I could easily copy 300 pictures of myself from the last 5 years from the phone to my desktop computer for processing.

I then opened them all up in Preview and very quickly and roughly cropped them down to just my face. I discarded those in which I was wearing sunglasses or was at a very odd angle to the camera.
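Getting those crops into the network is just a matter of loading them as arrays of pixel values, all the same size and scale. Something along these lines would do it; the folder name, image size, and use of Pillow and NumPy here are my own assumptions for illustration, not the exact code I ran.

    import glob
    import numpy as np
    from PIL import Image

    def load_faces(pattern="cropped_faces/*.jpg", size=64):
        """Load the rough face crops, resize them to a fixed square, and scale
        pixel values to [-1, 1] to match the tanh output of a typical generator."""
        images = []
        for path in sorted(glob.glob(pattern)):
            img = Image.open(path).convert("RGB").resize((size, size))
            images.append(np.asarray(img, dtype=np.float32) / 127.5 - 1.0)
        # Shape (num_images, size, size, 3); a channels-first framework would
        # want this transposed to (num_images, 3, size, size).
        return np.stack(images)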

I then fed those images into the GAN from before. And out of the Gaussian noise, a somewhat recognisable me started to emerge...

Further...

So there are definitely some likenesses there, but in most of them I still look like some apparition from a horror film.

I realised that the images all being slightly different crops and orientations was giving the GAN a hard time, bearing in mind this is a fairly simple network and far from the state of the art.

So I figured I could use another bit of machine learning to pre-process the images: a computer vision library to detect my facial features and then rotate, shift, and scale each image such that at least my eyes were in the same place in every photo.

If you want to know the full technical details and code on this, I've written a separate (subscribers only) post on Using OpenCV2 to Align Images for DCGAN.
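But to give a flavour of the approach, here is a rough sketch of the idea, using OpenCV's bundled Haar cascades to find the face and eyes and then a single affine warp to put the eyes where we want them. The detector choice, target eye positions, and output size are my assumptions for illustration, and the script in that post may differ in the details.

    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def align_face(image, out_size=256, left_eye_at=(0.35, 0.40), right_eye_at=(0.65, 0.40)):
        """Rotate, scale, and shift `image` (as read by cv2.imread) so the eyes
        land at fixed positions in an out_size x out_size crop."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            return None  # no face found; skip this photo
        fx, fy, fw, fh = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[fy:fy + fh, fx:fx + fw])
        if len(eyes) < 2:
            return None  # couldn't find both eyes; skip this photo
        # Keep the two largest eye detections and order them left-to-right,
        # converting each eye's centre into full-image coordinates.
        eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
        centres = sorted((float(fx + ex + ew / 2), float(fy + ey + eh / 2))
                         for ex, ey, ew, eh in eyes)
        (lx, ly), (rx, ry) = centres

        # Rotation and scale that map the detected eye line onto the target one...
        angle = np.degrees(np.arctan2(ry - ly, rx - lx))
        target_dist = (right_eye_at[0] - left_eye_at[0]) * out_size
        scale = target_dist / np.hypot(rx - lx, ry - ly)
        eyes_mid = ((lx + rx) / 2, (ly + ry) / 2)
        M = cv2.getRotationMatrix2D(eyes_mid, angle, scale)
        # ...plus a shift so the midpoint between the eyes lands at the target midpoint.
        M[0, 2] += out_size * (left_eye_at[0] + right_eye_at[0]) / 2 - eyes_mid[0]
        M[1, 2] += out_size * left_eye_at[1] - eyes_mid[1]
        return cv2.warpAffine(image, M, (out_size, out_size))

Run over the whole folder of crops, any photo where a face or both eyes can't be detected simply gets skipped.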

But the end result was pretty cool... and quite spooky. So here is an animated gif showing a few of the input images:

Notice how my face is a slightly different size in each one, and my eyes move about to different locations? Compare that to the aligned images:

Pretty eerie, right? Those are the same images as above, but the pre-processing AI has calculated the location of my eyes and transformed each image so that my eyes are in the same location in every one.

So let's feed that back into the GAN and see how we do...

Not bad! There are certainly some where it looks like I've been hit in the face with a shovel a few times... but overall it has actually done pretty well.

Again, what is pretty amazing is that none of these images ever existed. They are not distorted photos; they are the result of a machine learning algorithm learning what my facial features look like and creating an entirely new and unique image of me.

So to recap, in the end there were three entirely separate machine learning / AI algorithms used in this process:

  1. The machine learning on my iPhone that had analysed all my photos and clustered the ones of the same people together, such that I could easily find photos of me
  2. The OpenCV script used to pre-process the images, which located my facial features in the photos and then rotated, scaled, and shifted each image such that my eyes were in the same place in every image
  3. The Generative Adversarial Network (GAN) that was used to generate entirely new images from the existing ones.

For Coil subscribers, there is another animation showing the learning as it progressed.

Ceci n'est pas un canard - Machine Learning and Generative Adversarial Networks

An attempt to generate cartoon ducks via Generative Adversarial Networks (GANs)

Machine Learning - Reinforcement Learning

What is reinforcement learning? And how does it learn in a way similar to humans?

Vietnamese Coffee

Vietnamese coffee is traditionally served with condensed milk. The condensed milk is placed in the bottom of a glass cup, and a metal drip filter called a phin sits on top.

What is Machine Learning / Artificial Intelligence?

So, I'm about to start a series of posts on machine learning and artificial intelligence. This set of posts will be somewhat technical, but the idea will be to introduce people to the concepts of machine learning and artificial intelligence and to de-mystify it a bit.

The Bialetti Moka

The Moka is probably one of the most iconic traditional coffee makers, found in kitchens all across Italy, the Middle East, and further afield. It is a design that has lasted decades virtually unchanged.

Ripple xRapid Simulator

In order to try and get a feel for the potential viability of payments using Ripple's xRapid system, I thought I'd knock up a quick simulator to calculate the actual price of a transfer using xRapid to compare to the current money transfer options.

Using CNNs to Predict Cryptocurrency Price Movements

A lightning talk I gave at the PyData Bristol meetup on 20th Sept 2018. This is a talk about some experiments I have been doing trying to predict cryptocurrency price movements using a type of machine learning algorithm called a Convolutional Neural Network -- the same sort of AI used by computers to 'see' a cat or a dog in a photo -- in this case applied to market microstructure data on a cryptocurrency orderbook.

Banks are Dead. Long Live Banks.

Just as the postal service evolved with the introduction of the internet and email, banking will need to evolve with cryptocurrencies.

Updated to OpenBSD 6.3? Found your IPv6 Broken?

With the latest OpenBSD update, a change to the IPv6 auto address generation mechanism may cause your IPv6 to fail on some virtual hosts.