On the search and recommendations team at Wayfair, we like to think of ourselves as sophisticated men and women of the world. We have speakers of French, Chinese, Hebrew, German and Persian in the crew. Some of us host couchsurfers at our swanky apartments, and then turn around and couchsurf all over Europe. Others travel to Singapore, attend music festivals in Thailand, etc., etc. But until recently, when it came to giving our German-speaking customers a decent search experience, we could fairly have been characterized as inhospitable xenophobes.
Wayfair recently deployed Steelhead WAN optimizers in our network. We were unsatisfied with the 3 (2.5?) main deployment methods suggested by Riverbed, so we designed our own deployment. But first, the vendor-suggested methods:
There’s storm in Wayfair! And yes, the “a” article before the word “storm” is purposely not there. When referring to “storm” at Wayfair, we do not mean a conglomerate of barometric circumstances that lead to downpours from the skies and other natural phenomena (a storm); we mean real-time computation, horizontal scalability, and system robustness. We mean bleeding-edge technologies (a storm). Wayfair’s Order Management System (OMS) team introduced storm into our ever-growing technical infrastructure in February to implement event-driven processes.
We’ve used MongoDB at Wayfair for a subset of our customer data for a while. But we’re always looking for opportunities to speed up our infrastructure and give our customers a more responsive user experience. So when we heard about a new database platform called ‘/dev/null’, we became pretty excited. We can’t post a link, because it’s in a very private beta testing phase, but we can assure you that the stealth-mode startup that’s working on it is supported by a pair of high-class Silicon Valley VCs. The technology is supposed to be too cutting-edge for stodgy Boston, so we felt pretty lucky to be included. /dev/null is web scale, we heard, and it supports sharding! The slashes in the name certainly give it an edgy feel. IMHO it’s a bold move to name it that, because of the potential for gfail (weird names doing badly in Google search) and unexpected placement in alphabetical lists. But hey, as an NYC cabbie character said in Taxi Driver, they’re way ahead of us out there in California.
Everything comes with trade-offs, and the word on the street is that /dev/null is so heavily optimized for write performance that ‘read’ reliability can be less than ideal. But who knows? Maybe that’s the balance we want for our write-heaviest workloads.
So we got out our testing tools and went to work on a bake-off. We wanted to simulate real-world conditions as much as possible, so we wrote some PHP scripts that connected to our sharded development Mongo cluster. On the /dev/null side, configuring a cluster was pretty easy, as long as you start from a standard POSIX-style system.
After function-testing the PHP, we wrote a quick Apache Bench script to bake off the two systems. The results speak for themselves:
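For the curious, the write half of a bake-off like this can be sketched in a few lines (a toy Python harness standing in for our PHP scripts and Apache Bench run; the record payload and function name are made up for illustration). True to form, /dev/null accepts every write without complaint:

```python
import time

def bench_writes(path, n=100_000, record=b'{"sku": 1234, "qty": 1}\n'):
    """Time n sequential record writes to the given sink.
    A toy stand-in for our actual benchmark tooling."""
    start = time.perf_counter()
    with open(path, "wb") as sink:
        for _ in range(n):
            sink.write(record)
    return time.perf_counter() - start

elapsed = bench_writes("/dev/null")
print(f"100k writes to /dev/null: {elapsed:.3f}s")
```

Flat out to the horizon, every time, on any POSIX-style box.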
Past the 90th percentile, MongoDB’s latency curve is a classic hockey stick. /dev/null starts fast and stays flat, out to the far horizon. Love it. I don’t know that we’re ready to switch right away: I’ll be a little uncomfortable while it’s still in beta, and I’ll have to get to the bottom of those unreliable read operations. But this is looking *very* promising.
For now, I’m going to follow our internal process for new technologies like this, which is to email around a Wayfair Technical Finding (‘WTF’) to all the senior software engineers and architects, so we can put our heads together, evaluate further, and eventually make a plan to roll this out across all our data centers.
Hat tip to gar1t on Xtranormal.
When our company’s co-founder encouraged all of our Engineering department to participate in the Google Glass Explorer contest, I thought about project ideas that could help people by using the unique features of this new augmented-reality technology. I remembered a project that some fellow students did during a robotics class that I took in graduate school. It used eye-tracking technology to remotely control the motors on a vehicle. After confirming that Google planned to embed eye-tracking technology in their new product, I realized this idea could work for applications such as wheelchairs.
My plan is to provide feedback about the wearer’s surroundings, including obstacles and suggested paths, and enable him or her to control the wheelchair with eye movements. The original student project used patterns of a user’s eyes being opened or closed to change between types of motion. For my project, I want to use subtle yet deliberate movements of the eye to let the user interact seamlessly with the surrounding environment. I think this technology could be life-changing for persons with disabilities. I hope that being able to work on this project with the support of the Google Glass Explorer program will help make it a reality.
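As a purely hypothetical sketch of that control scheme (none of these names, thresholds, or commands come from the actual project), deliberate eye movements might be mapped to motion commands with a dead zone to filter out small involuntary movements:

```python
def gaze_to_command(dx, dy, dead_zone=0.2):
    """Map a normalized gaze offset (-1..1 on each axis) to a wheelchair
    motion command, ignoring movements inside the dead zone.
    All thresholds and command names here are illustrative assumptions."""
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "stop"  # small jitter is treated as no input
    if abs(dy) >= abs(dx):
        return "forward" if dy > 0 else "reverse"
    return "right" if dx > 0 else "left"

print(gaze_to_command(0.05, 0.1))  # jitter inside the dead zone
print(gaze_to_command(0.1, 0.8))   # a deliberate upward glance
```

A real system would of course need smoothing over time and a confirmation gesture before moving, but the basic mapping is this simple.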
I wrote up my idea, posted it on Google Plus with the #ifihadglass hashtag, and some fellow Wayfairians tweeted about it. Walter Frick of BostInno saw a tweet, did an interview with me, and then wrote an article about it. You can read the full story at these links:
Data Warehousing at Wayfair
In 2009 Wayfair’s database infrastructure was based almost entirely on Microsoft SQL Server. Our Business Intelligence team was using a SQL Server data warehouse to prepare a large amount of data for import into Analysis Services (SSAS) each day. We populated our data warehouse using transaction log shipping from production servers, which required about 3 hours of downtime on the data warehouse at midnight each night to restore the previous day’s logs. Once that was done, a series of stored procedures was kicked off by jobs that would crunch through data from several different servers to produce a star schema that could be pulled into SSAS. Wayfair was scaling rapidly, and this approach started to become painfully slow, often taking 10-16 hours to crunch through the previous day’s data.
The BI team decided to look into other solutions for data warehousing, and ultimately purchased a Netezza appliance. Netezza is essentially a fork of PostgreSQL that takes a massively parallel cluster of nodes (24 in our case) and makes them look like one database server to the client. In our tests, Netezza could crunch through our data in roughly a quarter of the time, bringing 10-16 hours down to a much more reasonable 2-4 hours. The dream of updating our data warehouse multiple times each day was starting to look feasible. The feedback loop on business decisions would become dramatically shorter, enabling us to iterate more quickly and make well-informed decisions at a much faster pace. There was just one glaring problem.
Great, But How Are We Going to Get Data Into It?
As soon as the DBA team heard that the Netezza purchase had been finalized, our first question was “great, but how are we going to get data into it?” The folks at Netezza didn’t have an answer for us, but they did send us an engineer to help devise a solution. As it turned out, the problem of how to incrementally replicate large amounts of data into a data warehouse was a common one, and there were surprisingly few open source solutions. Google it, and most people will tell you that they just reload all their data every day, or that they only have inserts so they can just load the new rows each day. “Great, but what if you want incremental replication throughout the day? What if you have updates or deletes? How do you deal with schema changes?” Crickets.
The First Solution
The solution we arrived at was to use SQL Server Change Tracking to keep track of which rows had changed on each table, and we built a replication system around that. We created stored procedures for each table that contained the commands required to use the CHANGETABLE() function to generate change sets, dump those to flat files on a network share using bcp, pipe them through dos2unix to fix the line endings, and load them into Netezza using the proprietary nzload command. Over the course of a few months we came up with an elaborate series of REPLACE() functions for text fields to escape delimiters, eliminate line breaks and clean up other data anomalies that had the potential to break the nzload. The whole process was driven by SSIS packages.
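To make that concrete, here is roughly what one leg of the pipeline did, sketched in Python rather than our actual T-SQL and SSIS tooling. The table and column names are made up; only the CHANGETABLE() syntax itself is real SQL Server, and the cleaning rules are a simplified stand-in for our chain of REPLACE() calls plus dos2unix:

```python
# Hypothetical change-set query for one table (illustrative names).
CHANGES_SQL = """
SELECT ct.SYS_CHANGE_OPERATION, t.*
FROM CHANGETABLE(CHANGES dbo.Orders, {last_sync_version}) AS ct
LEFT JOIN dbo.Orders AS t ON t.OrderID = ct.OrderID
"""

def clean_field(value, delimiter="|"):
    """Approximate the REPLACE() chain plus dos2unix: strip line breaks
    and escape the field delimiter so the flat file can't break nzload.
    The exact characters and escape style are illustrative."""
    return (value.replace("\r\n", " ")
                 .replace("\n", " ")
                 .replace("\r", " ")
                 .replace(delimiter, "\\" + delimiter))

row = ["42", "Mid-Century|Modern Sofa", "In stock\r\nships soon"]
print("|".join(clean_field(f) for f in row))
```

Multiply that by every table and every edge case in free-text columns, and you can see where the months went.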
This solution worked, but it was a maintenance nightmare. We frequently had to edit stored procedures when adding new columns, and we had to edit the SSIS packages to add new tables. SSIS uses GUI-based programming, and the editor for it (Business Intelligence Development Studio) is extremely slow and clunky, so even making simple changes was a painful process. Adding a new table into change tracking was a 14-step process that took over an hour of DBA time, and setting up a new database took roughly 28 steps and around two days of DBA time. We also had no solution for schema changes: we needed to manually apply them to the Netezza server, and if we forgot to do so the change tracking jobs would fail.
Release Early, Then Iterate Like Hell
Over the next few years, we iterated on this solution and added a number of useful features:
- We got rid of the per-table stored procedures and switched to a single stored procedure that used dynamic SQL instead.
- We created a solution for automated schema changes based on DDL triggers.
- We created a single stored procedure to handle adding new tables into change tracking, turning it into a one-step process.
- We added features to publish a subset of a table’s columns, because Netezza had a fixed row-size limit that some of our tables exceeded.
- We added a feature to trim the length of text fields, because large blobs of text usually aren’t needed on the data warehouse and they slowed down the process.
- We added logging of performance and health metrics to statsD, with alerts in Tattle.
- We added the ability to replicate changes from sharded master databases and consolidate them into one database on the slave.
- We added the ability to replicate to multiple SQL Server data warehouses in addition to Netezza.
- Because data on our masters moved into archive tables when certain criteria were met, we added a feature to apply changes to a table and its archive in one transaction on the slave, eliminating the temporary appearance of duplicate data.
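The column-subsetting and text-trimming features amount to logic like the following. This is a simplified Python sketch, not the actual dynamic SQL; the helper names, column sizes, and the row-size limit shown are all illustrative, since the real feature chose columns via configuration:

```python
def fit_columns(columns, row_limit):
    """Pick a prefix of (name, byte_size) columns whose total size stays
    within the destination's fixed row-size limit. In practice the
    column subset was configured, not chosen greedily like this."""
    chosen, total = [], 0
    for name, size in columns:
        if total + size > row_limit:
            break
        chosen.append(name)
        total += size
    return chosen

def trim_text(value, max_len=256):
    """Trim large text blobs the warehouse doesn't need in full."""
    return value[:max_len]

cols = [("OrderID", 8), ("Status", 16), ("Notes", 4000), ("AuditBlob", 70000)]
print(fit_columns(cols, 65535))
```

The point is simply that some table definitions could not be shipped verbatim, so a per-table projection had to live somewhere in the pipeline.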
Not Good Enough
Ultimately, we were still unhappy with the solution. It was too heavily based on stored procedures, functions, configuration via database tables, xp_cmdshell, and worst of all, linked servers. It was still a nightmare to set up new databases, and when we wanted to make changes we had to edit the same stored procedures in 20+ different places. It was still single-threaded. Worst of all, it was tightly coupled: if one slave server fell behind, the others suffered for it. It was also extremely specific to the use case of replicating data from SQL Server to either SQL Server or Netezza, and Wayfair was beginning to make much more use of open source databases like MySQL and MongoDB. In early 2012, we realized this solution wasn’t going to scale any further through iteration. We needed a redesign. We needed something fresh.
Redesigned from the ground up and inspired by the Tesla Replicator in The Prestige, Tesla was the solution to our data warehousing woes. We completely avoided stored procedures, functions, configuration tables, SSIS, dynamic SQL, xp_cmdshell and linked servers. Instead, we wrote Tesla in C# (primarily due to one incredibly useful .NET class for copying data between SQL servers) and moved all the logic into the application. Tesla is a single console application that takes care of everything we were doing with stored procedures and SSIS before. Its configuration is based on files rather than tables, which we can version control and deploy using our push tool. It’s multi-threaded and uses a configurable number of threads, allowing us to replicate the most important databases as quickly as possible. It’s completely decoupled, meaning that if one slave falls behind it doesn’t impact the others. It was also designed to be extensible to other data technologies, both as sources and destinations.
Tesla is built into a few agents such as Master and Slave. These agents are run as scheduled tasks in the scheduler of your choice, and they each have their own configuration files. They are completely decoupled and can be run on separate servers and at separate times.
The design for Tesla was inspired by LinkedIn’s article about DataBus. Specifically, the idea of a master server publishing its change sets to a relay server and the slaves polling the relay for those changes was appealing to us. It meant less load on the masters, and it also meant we could store the change sets in such a way that if a slave fell behind it would be able to get consolidated deltas to more efficiently catch up. The biggest difference between Tesla and DataBus is that we focus on batch-based change sets, rather than streaming. Batches are captured on the master as one semi-consistent view of a database at a given point in time, reducing the chance of orphaned or incomplete data on the data warehouse. It also makes the most sense for a technology like Netezza, which is terrible at small transactions and great at large batches.
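The relay idea can be modeled in a few lines. This is a toy in-memory model, not Tesla’s actual C# implementation: the master publishes versioned batches to the relay, and each slave independently asks for everything after the last version it applied, so a lagging slave catches up with one consolidated delta instead of slowing anyone else down.

```python
class Relay:
    """Toy relay: stores change batches keyed by version number."""
    def __init__(self):
        self.batches = {}  # version -> {row_key: latest_value}

    def publish(self, version, batch):
        self.batches[version] = batch

    def changes_since(self, last_version):
        """Consolidate all batches after last_version into one delta,
        keeping only the newest value per row, so a slave that fell
        behind applies each row once rather than replaying history."""
        delta = {}
        for version in sorted(self.batches):
            if version > last_version:
                delta.update(self.batches[version])
        return delta

relay = Relay()
relay.publish(1, {"order:1": "placed"})
relay.publish(2, {"order:1": "shipped", "order:2": "placed"})

# A slave that last applied version 0 gets one consolidated delta:
print(relay.changes_since(0))
```

Because the slaves only ever read from the relay, the master does the extraction work once no matter how many warehouses consume the changes.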
Tesla is fully open source and available on GitHub. It currently supports SQL Server as a master, slave and relay server, and Netezza as a slave. It was designed with extensibility in mind, so we expect to add more technologies on both sides over time. We already have a slave adapter for Hive in the works. Feel free to hack away, add features, and submit pull requests!
Wayfair was invited to be a sponsor at this year’s Beanpot Hackathon (link: http://www.hackbeanpot.com/), held last week at the Microsoft NERD center in Cambridge. The concept of a hackathon is so closely related to our core values that we jumped at the opportunity to participate. Wylie Conlon, along with others from the nuACM (link: http://acm.ccs.neu.edu/), did a great job organizing this event.
For those unfamiliar, a hackathon is a fantastic display of creativity, technical skills, teamwork, problem solving, and time management, all compressed into a single marathon event. The Beanpot Hackathon produced 17 demos, which is impressive considering the event lasted only about 24 hours.
As the event got underway, the dinner area was buzzing with excitement. Groups of people informally huddled together, some with a whiteboard to their side, drawing sketches and getting feedback, others researching on their laptops, everyone engaged in discussion, bouncing ideas back and forth. As different teams solidified, they moved to the main conference room to start building their projects. The one theme that was consistent across all groups was passion for technology, and enthusiasm to get something ready for demo.
Most groups worked through the night, taking short naps between bursts of coding. We had some of our engineers available as mentors, although most groups seemed to be heads down and not looking for outside assistance. Near the entrance to the conference room, Wayfair set up a duck pond, available for those needing a fun distraction from their project. There was a fishing pole, and you could pull a duck from the pond to win a prize. A rubber duck also serves as a good sounding board for ideas, or for debugging code when you are stuck. (link: http://en.wikipedia.org/wiki/Rubber_duck_debugging/)
By the time Saturday evening arrived, I was blown away by some of the projects that teams had put together. Not only were the demos cool applications that solved real problems, but the presentations were also well done. In many cases, the presenters talked about their inspiration, their thought process, and where they saw the idea going next. Questions from the audience were often constructive and suggested improvements.
Looking back on the event, I think one of the reasons we aligned so well with this particular event is the similarity to our work environment: smart people using technology to solve problems quickly and get things done. I see that demonstrated every day in our engineering department, and it was refreshing to see so many talented students come together for an event like this. On a related note, we are hiring for summer internships in our software development group. If you were a participant at the Beanpot Hackathon, or this type of environment sounds good to you, please get in touch with us (link: firstname.lastname@example.org).
(contributors: Elias Y., Nishan S.)
We’ve received a few questions like this, both online and in person, so I figured it was probably worth explaining in a little more detail.
On the Deployment server, we have a variety of applications that we deploy: Windows .NET services, Python, classic ASP, CSS/JS, and PHP, to name a few.
We chose to standardize the interface to the Deployment server to make creating new code deployment clients simpler. Our Deployment server is essentially an on-demand package creation and deployment system.
Last winter we were discussing all of our upcoming projects and what they would require in new hardware for the datacenter. Then we took a look at our cage space at our main datacenter. It turned out we didn’t have enough space, the facility wouldn’t give us any more power in our current footprint, and there was no room to expand our cage. We had two basic options. One was to add additional cage space, either in the same building or in another facility, and rely on cross connects or WAN connections. We weren’t wild about this approach because we knew it would come back to bite us later as we continuously fought with the split and had to decide which systems belonged in which space. The other option was to move entirely into a bigger footprint. We opted to stay in the same facility, which made moving significantly easier, and moved to a space that is 70% larger than our old one, giving us lots of room as we grow. Another major driver in the decision to move entirely was that it afforded us the opportunity to completely redo our network infrastructure from the ground up, with a much more modular setup and, finally, 10Gb everywhere in our core and aggregation layers.
Some stats on the move:
- Data migrated for NAS and SAN block storage: 161 TB
- Network cables plugged in: 798
- Physical servers moved or newly installed: 99 rack mount and 50 blades
- Physical servers decommissioned to save power and simplify our environment: 49
- VMs newly stood up or migrated: 619
It’s worth noting that the physical moves were done over the course of 2 months. Why so long? Unlike many companies that can have a weekend to bring things down, we aren’t afforded that luxury. We have customer service working in our offices 7 days a week both in the US as well as Europe, and we have our website to think about, which never closes. In fact, we were able to pull this off with only a single 4-hour outage to our storefront, and several very small outages to our internal and backend systems during weeknights throughout the project.
No matter how good your documentation is, it’s probably not good enough. Most folks’ documentation concentrates on break/fix and the general architecture of a system: what’s installed, how it’s configured, etc. Since we drastically changed our network infrastructure, we had to re-IP every server when it was moved, so we had to come up with procedures for everything else that needed to happen when a machine suddenly had a new IP address. We use DNS for some things, but not everything, so we had to ensure that interrelated systems were also updated when we moved things.
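Much of the re-IP procedure boiled down to "find every place the old address is hard-coded and rewrite it." A tiny helper along these lines gets the idea across (the address mapping and config contents are made up; our actual process was a documented checklist per system, not a single script):

```python
import re

def re_ip(text, ip_map):
    """Rewrite every hard-coded old IP in a config blob to its new
    address. Word boundaries ensure 10.0.0.15 never matches inside
    10.0.0.152, a classic re-IP footgun."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, ip_map)) + r")\b")
    return pattern.sub(lambda m: ip_map[m.group(1)], text)

# Hypothetical old -> new address mapping for two servers:
moves = {"10.0.0.15": "10.20.0.15", "10.0.0.152": "10.20.0.152"}
conf = "db_host=10.0.0.15\ncache_host=10.0.0.152\n"
print(re_ip(conf, moves))
```

The harder part, of course, was knowing which files and systems to feed through it, which is exactly what the documentation had to capture.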
Get business leads involved in the timeline. This sounds funny, but one of the biggest metrics in measuring the success of a project like this is the perception of the users. Since a good percentage of the systems moved had certain business units as their main “customers”, we worked with leaders from these business units to ensure we understood their use of the systems, which days or times of day they used them most, and any concerns they had about off-hours operations during different times of the week. Once we had this info from many different groups, we sat down in a big room with all the engineers responsible for these systems, came up with a calendar for the move, and then got final approval for dates from the business leads. This was probably the smartest thing we did, and it went a long way in helping our “customer satisfaction”.
Another thing we learned early on was to divide the physical moving of equipment from the work done by the subject matter experts to make system changes and ensure things were working properly after the physical move. This freed the subject matter experts to get right to work, without having to worry about other, unrelated systems that were being moved in the same maintenance window. How did we pull this off? Again, include everyone. We have a large Infrastructure Engineering team, 73 people as of this writing. We got everyone involved, from our frontline and IT Support groups all the way up to directors; even Steve Conine, one of our co-founders, did an overnight stint at the datacenter helping with the physical move of servers. It was an amazing team effort, and we would never have had such a smooth transition if everyone didn’t step up in a big way.
I hope these little tidbits are helpful to anyone taking on such a monumental task as moving an entire data center. As always, thanks for reading.
At Wayfair, we are working on a next generation of systems to power our business. The decade-old systems that currently keep us running in stride have allowed Wayfair.com to vault from nothing to where it is today. But as with all systems, they have started to show their age.