LONDON, United Kingdom — From a perfect biker jacket that scrolls into view on Instagram to a striking floral printed top spotted in the street, the kind of visual inspiration that generates purchasing intent for fashion items is everywhere. But the consumer journey from inspiration to transaction — identifying, locating and ultimately buying the item of interest — has traditionally been fragmented and full of friction.
Now, visual search technology, leveraging sophisticated algorithms originally developed for security applications, promises to radically shorten the path from inspiration to transaction, allowing anyone with a smartphone to immediately identify and purchase the products they encounter — online, or in the physical world.
In recent years, a range of start-ups, including ASAP54, Snap Fashion, Slyce and Style-Eyes, have tapped the promise of visual search to launch “Shazam for fashion” apps. But so far no single service dominates the space. Each uses a slightly different approach to identify products — ASAP54, for example, augments its service with human experts and crowdsourced results — though they all share a core technology platform built upon artificial intelligence systems that mimic the way humans recognise objects.
While also based on this same fundamental technology, Cortexica — a company founded in 2009 by Dr Anil Anthony Bharath and Jeffery Ng at Imperial College London — is approaching the business of visual search a little differently. Instead of building its own consumer-facing app, like many of the players in the space, this self-described “fashion image-recognition company,” which has raised £4.5 million (about $7.5 million) in initial funding from Imperial Innovations, has set out to deliver visual search as a service. Currently, the company’s clients include e-commerce giant Zalando and fashion aggregator ShopStyle, whose product inventories the company “ingests” and matches with images found on social media.
BoF spoke with Cortexica's chief executive, Iain McCready, to discuss the evolution of visual search and its impact on the business of fashion.
BoF: What's wrong with finding fashion with traditional search?
IM: Fashion is visual and when you're searching for a particular fashion item, it's often quite difficult to describe what you're looking at. Two people searching for the same product on Google might use different terms and get completely different results. Visual search reduces friction — you just take a picture and find it. Businesses and customers both want to speed up processes online. Simply put, visual search takes less time and provides better results.
BoF: How does it work?
IM: When people look at an object, they pick up ten, maybe twenty features which enable them to identify what it is. Visual search mimics that process, finding key points in any image and matching them with products in a database that share those points. Our tool requires 500 to 1,000 instructions to identify an item from a picture. It's a lot like matching fingerprints.
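The fingerprint analogy can be made concrete with a toy sketch: each image is reduced to a small set of feature descriptors, and a query is matched to the catalogue item whose descriptors best cover the query's. Everything here — the 2-D "descriptors", the catalogue entries and the tolerance — is invented for illustration; Cortexica's actual pipeline is not public.

```python
# Toy sketch of keypoint-style image matching. Each image is represented by
# a handful of feature descriptors (here, made-up 2-D points); a query image
# matches the catalogue item sharing the most nearby descriptors, much like
# comparing fingerprints by their minutiae.
from math import dist

def match_score(query_features, item_features, tolerance=0.1):
    """Count query descriptors that have a close counterpart in the item."""
    return sum(
        1 for q in query_features
        if any(dist(q, f) <= tolerance for f in item_features)
    )

def best_match(query_features, catalogue):
    """Return the catalogue item whose descriptors best cover the query's."""
    return max(catalogue, key=lambda name: match_score(query_features, catalogue[name]))

# Invented descriptors standing in for real image features.
catalogue = {
    "biker jacket": [(0.1, 0.9), (0.4, 0.4), (0.8, 0.2)],
    "floral top":   [(0.9, 0.9), (0.5, 0.1), (0.2, 0.3)],
}
query = [(0.12, 0.88), (0.41, 0.38)]  # noisy versions of the jacket's points
print(best_match(query, catalogue))
```

Real systems use descriptors with hundreds of dimensions per key point and approximate nearest-neighbour indexes, but the shape of the computation — local features, tolerant matching, vote counting — is the same.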
BoF: What have the results been?
IM: Compared to looking for a specific item by typing a description into Google or searching through an online store, it’s one click instead of five. And that means more sales, because people drop off the more they have to click, especially if the results they are getting are not relevant.
What’s more, customers may be looking for a specific item, but we have also found that they want choice; they want to be inspired within the parameters of their search. We offer “Find Similar” technology which pools together groups of products under certain categories, offering a range of search results. Items are usually matched based on shape, colour, texture or print. And the outcome is often surprising: the most similar results are not always the most highly purchased. Often, it’s a case of, “I did like that one, but I prefer this version.”
Millennials are a particularly interesting market as they have grown up using camera phones and social media. And the other interesting thing we can offer brands is data and analytics about what people are taking photos of and sharing. It’s a great way to get immediate intel about a particular product.
BoF: Fashion items can have very subtle, but important differences that can be very hard for an algorithm to gauge; for example, the difference between a pair of Vans and a pair of Céline Vans. What are the current limitations of visual search?
IM: For a long time, image recognition systems really struggled with identifying exact shape, which created huge problems in matching shoes and accessories. We have now mastered the identification of shape automatically, so we feel we've overcome a massive hurdle in that sense. But with fashion search, it will always be very difficult to achieve exact matches due to stock availability and the sometimes unclear nature of a picture. But, again, we've found it's not always about finding the exact item.
BoF: How are companies blending visual search with human input to drive business?
IM: Fashion is so intrinsically subjective, which is why a human touch can work very well when combined with technology to speed things along. We work very closely with our clients to define the parameters of our search. They can tell us, for example, if they don't want customers to be shown a particular brand, colour or material. A lot of shopping aggregation sites have started off very successfully using human researchers. Olapic is a great example; they scour social media sites looking for pictures of items and then feed these images back onto digital product pages. They have seen a ten-fold sales uplift on products bolstered by social imagery. But as Olapic have expanded their business, they've needed help finding images more quickly and efficiently and we have been talking with them about how visual search technology can help.
BoF: How quickly will the technology advance?
IM: By the end of 2015, we aim to be able to identify every fashion item in a typical street style image. All our clients would love to use this and it's the next big project on our roadmap. We are also looking at moving image recognition from the cloud onto the device itself, which would make the process quicker by about two or three seconds, as it lets you run the technology straight from your phone. We're working with Imagination Technologies on smartphone technology that would let you take a video or picture of an outfit and have the phone match available items in real time, straight from the device, perhaps even based on your specific interests.
This interview has been edited and condensed.