What better way to launch a blog than with an origin story? Our CTO Leon Mergen spins the tale of Autheos beginnings – no holds barred.
There were a few givens to begin with.
We knew that adding video to a product page on an e-commerce site is perhaps the single most effective way to drive sales: studies have shown conversion rates increase by as much as 68%. And we knew that product video viewing data fills a gaping hole in a brand's or e-tailer's ability to assess how effectively their online and offline marketing drives e-commerce sales.
Up to this point, we had built an okay product video distribution platform, but we also knew we couldn’t scale globally with the technology we were using. So, in September last year, we decided to transition to AWS. In the same period, we built an e-commerce marketing support tool for brands which, judging by customer response, is a game changer. But let’s back up…
The Perils of Good Fortune
Autheos was founded when the biggest online retailer in Holland and Belgium asked us to turn an existing piece of technology into a video hosting solution that would automatically find and insert product videos into their product sales pages. A startup rarely finds itself in a better starting position, so we jumped right in and began coding. In retrospect, this was a mistake for two reasons.
For one thing, we grew too fast. When you have a great client that really wants your product, the natural reaction is to build it as fast as you can. With no team in place, we onboarded engineers too quickly and outsourced several components to remote development shops, which led to communication problems and technical incompatibilities.
More importantly, since we already had an existing piece of technology, we didn't take the time to think about how we would build it if we were starting from scratch; it seemed quicker to adapt it to the new requirements. And, just like a homeowner who opts for renovation instead of a complete tear-down and rebuild, we made compromises as a result.
However, thanks to many all-nighters we managed to meet the deadline and launch a platform that allowed brands such as Philips, LEGO, L’Oreal, and Bethesda to upload product videos (commercials, guides, demos, reviews, etc.) for free and tag them with a product code and language.
The results: less work for the e-tailer (no more manually gathering videos, decoding/encoding, hosting them, and matching them with the right products) and more sales. Our client convinced its brands to start uploading their videos, kickstarting our exponential growth. Soon we had so many brands on our platform, and so many videos in our database, that nearly all major e-tailers in Benelux wanted to work with us as well (often pushed to do so by brands, who didn't want the hassle of integrating with many different e-tailers).
This might sound great, but remember how we built the product in a rush on legacy code? That translated into a fair amount of fire-fighting. Coupled with too many moments when planned features turned out to be impossible given our back-end's limitations, we finally decided that enough was enough. It was time to start over.
A New Beginning with AWS
Our key requirements were that we could scale globally and seamlessly, log and process all of our data, and provide high-performance access to our ever-growing database of product videos. Beyond that, we needed to ship new features and products quickly without impacting wider operations. Oh, and we wanted to be up and running on the new platform within six months. AWS is the de facto standard for web applications, so the choice was an easy one. We soon realized it wasn't just an easy decision; it was a really smart one, too.
Elastic Transcoder was the main reason we decided to go with AWS. Before ET, we relied on a custom transcoding service built by an outsourced company in Eastern Europe and hosted there on antiquated servers; it suffered frequent downtime and caused a lot of headaches. Elastic Transcoder lets us forget about all of those problems and gives us a stable transcoding service that we can scale on demand.
When we moved our application servers to AWS, we also activated Amazon CloudFront. Even though many other CDNs are available, this was a no-brainer: CloudFront integrates unbelievably well with the rest of AWS. Essentially, it just worked. With a few clicks we built a transcoding pipeline that writes its output straight to an S3 bucket served by CloudFront. We make a single API call, and AWS takes care of the rest, including CDN hosting. It's really that easy.
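To make the "single API call" concrete, here is a minimal sketch of submitting one upload to an Elastic Transcoder pipeline whose output bucket sits behind CloudFront. The pipeline ID is a hypothetical placeholder, and the preset shown is Elastic Transcoder's stock web-friendly H.264 system preset; this is an illustration, not our actual production code.

```python
PIPELINE_ID = "1111111111111-abcde1"   # hypothetical pipeline ID
WEB_PRESET = "1351620000001-100070"    # Elastic Transcoder's generic "Web" system preset

def build_transcode_job(input_key: str, output_key: str) -> dict:
    """Assemble the create_job parameters for a single uploaded video."""
    return {
        "PipelineId": PIPELINE_ID,
        "Input": {"Key": input_key},        # source object in the pipeline's input bucket
        "Outputs": [{
            "Key": output_key,              # lands in the output bucket CloudFront serves
            "PresetId": WEB_PRESET,
        }],
    }

def submit(job_params: dict) -> str:
    import boto3  # imported lazily so the payload builder can be exercised offline
    client = boto3.client("elastictranscoder")
    return client.create_job(**job_params)["Job"]["Id"]
```

Everything after `create_job` (transcoding, storage, CDN distribution) is handled by AWS; the application never touches the encoded files.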
As we generate a huge number of log records every day, we had to store them in a flexible, scalable environment. A regular PostgreSQL server would have worked, but it would never have been cost-efficient at our scale. So we prototyped with Amazon Redshift, AWS's PostgreSQL-compatible data warehousing solution. We set up Kinesis Firehose to stream data from our application servers to Redshift, writing it out in batches (in essence, a full ETL process as a service). Doing this with a traditional webhost would have taken months; with AWS we set it all up in three days.
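The application-server side of that pipeline is tiny. As a hedged sketch (the stream name and event shape are hypothetical), each view event is serialized as newline-terminated JSON — the framing Firehose expects when it concatenates records into batches for the Redshift `COPY` — and pushed with a single `put_record` call:

```python
import json

STREAM_NAME = "video-view-events"  # hypothetical Firehose delivery stream

def encode_event(event: dict) -> bytes:
    """Serialize one log record as newline-terminated JSON.

    Firehose concatenates raw records into batch files, so each record
    must carry its own trailing newline to stay parseable downstream.
    """
    return (json.dumps(event, separators=(",", ":")) + "\n").encode("utf-8")

def put_event(event: dict) -> None:
    import boto3  # imported lazily so encode_event can be tested offline
    firehose = boto3.client("firehose")
    firehose.put_record(
        DeliveryStreamName=STREAM_NAME,
        Record={"Data": encode_event(event)},
    )
```

Firehose then handles buffering, batching, and the load into Redshift without any ETL code on our side.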
Managing this data through data-mining frameworks was the next big challenge. Many solutions exist in the market, but Amazon offers great ones in an integrated platform that let us test and implement rapidly. For batch processing we use Spark, provided by Amazon EMR. For temporarily hooking into data streams (e.g. for our monitoring systems) we use AWS Data Pipeline, which gives us access to the stream of data as it is generated by our application servers, comparable to what Apache Kafka would give you.
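Submitting a Spark batch job to EMR is itself just an API call. Below is an illustrative sketch (the cluster ID, script location, and job name are hypothetical) of adding a `spark-submit` step to a running cluster via EMR's `command-runner.jar` mechanism:

```python
def build_spark_step(script_s3_uri: str, day: str) -> dict:
    """EMR step that spark-submits an aggregation script over one day of logs."""
    return {
        "Name": f"aggregate-views-{day}",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            # command-runner.jar is EMR's standard wrapper for shell-style commands
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", script_s3_uri, "--day", day],
        },
    }

def submit_step(cluster_id: str, step: dict) -> str:
    import boto3  # lazy import keeps the step builder testable offline
    emr = boto3.client("emr")
    return emr.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])["StepIds"][0]
```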
Everything we use is accessible through an SDK, which lets us run integration tests effectively in an isolated environment. Instead of mocking services, or setting up temporary services locally and in our CI environment, we use the AWS SDK to create and clean up AWS services on the fly. The flexibility and operational effectiveness this brings is incredible: our whole production environment can be replicated in a programmable setup in which we can simulate specific experiments. We also catch many more problems by actually integrating all services in our automated tests, problems you would otherwise only catch during manual testing or staging.
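A minimal sketch of that pattern, assuming pytest-style tests and a hypothetical naming scheme: a context manager creates a uniquely named resource for one test and guarantees cleanup afterwards. The client is injected, so a real boto3 S3 client is used in CI while a fake suffices for unit tests.

```python
import contextlib
import uuid

@contextlib.contextmanager
def temporary_bucket(s3_client, prefix: str = "autheos-test"):
    """Create a uniquely named bucket for one test, then delete it.

    Pass a real boto3 S3 client for an integration test, or any object
    with create_bucket/delete_bucket methods for an offline test.
    """
    name = f"{prefix}-{uuid.uuid4().hex[:12]}"  # unique per run, avoids collisions
    s3_client.create_bucket(Bucket=name)
    try:
        yield name
    finally:
        s3_client.delete_bucket(Bucket=name)   # cleanup runs even if the test fails
```

Because each test provisions and tears down its own real resources, there is nothing to keep in sync with production.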
Through AWS CloudFormation and AWS CodeDeploy we seamlessly built our cloud from templates, and integrated this with our testing systems to support our continuous deployment setup. We could, of course, have used Chef or Puppet with traditional webhosts, but the key benefit of the AWS services is instant access to a comprehensive ecosystem of tools and features with which we can integrate (and de-integrate) as we go.
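For flavor, here is a deliberately minimal CloudFormation template in that spirit: one application server, tagged so a CodeDeploy deployment group can target it. The AMI ID, tag name, and sizing are placeholders, not our actual stack.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative single app server, targeted by CodeDeploy via its tag
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI
      InstanceType: t2.micro
      Tags:
        - Key: codedeploy-group    # CodeDeploy deployment groups match on tags
          Value: production
```

Because the whole environment is declared this way, spinning up a throwaway copy for testing is one `create-stack` call, and tearing it down is one `delete-stack`.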
One month in, things were going so smoothly that we did something that we had never done before in the history of the company: we expanded our goals during a project without pushing out the delivery date. We always knew that we had data that could be really valuable for brands, but since our previous infrastructure made it difficult to access or work with this data, we had basically ignored it. However, when we had just finished our migration to Redshift, one of our developers read an article about the powerful combination of Redshift and Periscope. So we decided to prototype an e-commerce data analysis tool.
We connected Periscope to our Redshift tables almost instantly, and saw our 500+ million records visualized in a few graphs that the Periscope team prepared for us. Jaws dropped, and our product manager went ahead and built an MVP. A few weeks of SQL courses, IRC spamming, and nagging the Periscope support team later, we had an alpha product.
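To give a sense of the kind of chart behind those dashboards, here is an illustrative query builder for "daily views for one brand". The table and column names are hypothetical, not our actual schema; `DATE_TRUNC` and `GETDATE()` are standard Redshift SQL.

```python
def daily_views_query(brand_id: int, days: int = 30) -> str:
    """Compose the SQL behind a daily-views dashboard chart (illustrative schema)."""
    return (
        "SELECT DATE_TRUNC('day', viewed_at) AS day, COUNT(*) AS views "
        "FROM video_views "
        f"WHERE brand_id = {int(brand_id)} "        # int() guards against injection
        f"AND viewed_at > GETDATE() - INTERVAL '{int(days)} days' "
        "GROUP BY 1 ORDER BY 1"
    )
```

Pasting a query like this into Periscope yields a time-series chart directly; the product is mostly curating which of these questions matter to a brand.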
We have shown this to a dozen major brands and the response has been all we could hope for… a classic case of the fabled product / market fit. And it would not have happened without AWS.
An example of the dashboard for one of our founding partners (a global game development company).
With a state-of-the-art platform, promising new products, and the backend infrastructure to support global viral growth we finally had a company that could attract the attention of professional investors… and within a few weeks of making our new pitch we had closed our first outside investment round.
We’ve come a long way from working with a bare-bones transcoding server, to building a scalable infrastructure and best-in-class products that are ready to take over the world!
Our very first transcoding server.
Driving viral spread to increase network effects, we're focusing on signing up new e-tailers and brands at a rapid pace. We're putting the finishing touches on the first version of our e-commerce data analysis product, and speccing out additional products and features for the brands and e-tailers who already use the Autheos platform.
This chapter of our origin story comes to a close here, but we’re really just getting started. Stay tuned.
For a peek into the life and times of a startup, follow us on Twitter @autheosofficial.