Setting up a Data Warehouse with AWS Redshift and Ruby


Most startups eventually need a robust solution for storing large amounts of data for analytics. Perhaps you’re running a video app trying to understand user drop-off or you’re studying user behavior on your website like we do at Credible.

You might start with a few tables in your primary database. Soon you may create a separate web app with a nightly cron job to sync data. Before you know it, you have more data than you can handle, jobs are taking way too long, and you’re being asked to integrate data from more sources. This is where a data warehouse comes in handy. It allows your team to store and query terabytes or even petabytes of data from many sources without writing a bunch of custom code.

In the past, only big companies like Amazon had data warehouses because they were expensive, hard to set up, and time-consuming to maintain. With AWS Redshift and Ruby, we’ll show you how to set up your own simple, inexpensive, and scalable data warehouse. We’ll provide sample code that shows you how to extract, transform, and load (ETL) data into Redshift, as well as how to access the data from a Rails app.

I created a tutorial for Redshift and Ruby here and wrote about it on our new engineering blog.
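To give a taste of the load step, here’s a minimal Ruby sketch (not the tutorial’s actual code): Redshift speaks the Postgres wire protocol, so the pg gem can issue a COPY statement that bulk-loads CSV files from S3. The table, bucket, and IAM role names below are hypothetical placeholders.

```ruby
# Build a Redshift COPY statement that bulk-loads a CSV file from S3.
# Redshift is Postgres-compatible on the wire, so the resulting SQL can
# be executed with the pg gem. All names below are placeholders.
def build_copy_sql(table:, s3_path:, iam_role:)
  <<~SQL
    COPY #{table}
    FROM '#{s3_path}'
    IAM_ROLE '#{iam_role}'
    FORMAT AS CSV
    IGNOREHEADER 1;
  SQL
end

sql = build_copy_sql(table: 'events',
                     s3_path: 's3://my-etl-bucket/events/2015-09-01.csv',
                     iam_role: 'arn:aws:iam::123456789012:role/redshift-loader')
puts sql

# Against a real cluster, you would run it with the pg gem, e.g.:
#   conn = PG.connect(host: ENV['REDSHIFT_HOST'], port: 5439,
#                     dbname: 'analytics', user: ENV['REDSHIFT_USER'],
#                     password: ENV['REDSHIFT_PASSWORD'])
#   conn.exec(sql)
```

Because Redshift loads from S3 in parallel across its slices, a single COPY like this scales far better than the row-by-row inserts a nightly cron job would do.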

Why VR is Ready but Robotics and Other Tech Aren’t


When I talk about Virtual Reality (VR), my heart rate increases, my volume climbs, and before you know it, someone’s asking me to chill the frak out. Oftentimes people who haven’t seen the light scoff at the notion that VR is ready for take-off. In fact, they usually start touting something else. Timing is paramount in tech, with the biggest example being Webvan, a spectacular failure and the “biggest dot-com flop in history”. However, the idea of online grocery delivery is now a booming business (e.g. Instacart, Safeway and Costco deliveries)–it wasn’t ready back then. Here are some tech trends and why they’re not ready to take off.


Robotics

I’d consider any autonomous robot like the Roomba, Jibo, or those creepy animal-bots part of this trend. Google’s self-driving car counts, and quad/hexa/octo-copters kinda count as well. Unlike web apps, which just require software, robots need both hardware and software. You’ll need a robot base like the Roomba dev kit, some computer like a Raspberry Pi, and then ideally some sensors and actuators (e.g. a camera and an arm so it can fetch the beer from the fridge for me). You can leverage commercial platforms like Sphero, but these have few sensors and thus aren’t very interesting (I’ve done some Ruby programming with one). Until hardware costs go down, you’ll be hard-pressed to find people writing software for robots. For example, a big reason why Google Glass didn’t take off was that it cost a whopping $1500 and wasn’t even 3D (the other reason was that people who had them tended to be Glassholes). Even when the hardware becomes dirt-cheap, writing software for interacting with people and the real world is fraking tough. We have a hard enough time getting websites and mobile apps to not suck today.

Internet of Things (IoT), EdTech, and BioTech

With platforms like SmartThings and Amazon Echo, you can do awesome things like turn off your lights by saying “Alexa, turn off the living room lights”. Echo costs $180, and smart light bulbs cost at least $30–10x the cost of a normal light bulb (how many light bulbs do you have in your house?). IoT also usually requires good WiFi, and we all know how good that is. IoT devices are also much more useful if they’re portable, but then we have the issue of battery life, which we’re all familiar with because nobody’s phone lasts more than a day. As for the software, Amazon Echo, Microsoft Cortana, Apple Siri, and Google Now are all in their infancy. Heck, most of the Google Now team just quit. Education and healthcare technology are mired in regulation and bureaucracy, a common cause of tech trends taking longer than expected. Quadcopters and self-driving cars are starting to see the same hurdles appear too. Take a look at the FAA drone regulations or the self-driving car laws that were introduced across the US in 2015.

VR and the Order of Things

The Oculus Rift DK2 costs $350. The first consumer version of the Rift will be ~$300–$500, or ~$1500 if you include a gaming PC. Keep in mind that most of the gaming community already has the necessary PC hardware. Samsung’s GearVR is $200 if you own one of their high-end phones, and all the way down the line, you can get a Google Cardboard for less than $10. The Oculus Rift will soon be sold as a consumer product at scale in the price range of other, more mature products like the Xbox One ($350), PlayStation 4 ($400), iPhone ($650 off-contract), and iPad Air 2 ($500). Given that Oculus has already sold 175,000+ dev kits, we’ll be seeing VR products under Christmas trees for years to come.

For software, VR leverages the game industry. Popular game engines, the foundation of VR software, are either freemium (Unity3D, $1500 pro version) or have no upfront cost (Unreal Engine, 5% royalty). Microsoft Visual Studio, the standard IDE, is also freemium. With these tools, you can already train to be a Jedi (I’ve tried this and it’s even more awesome than it looks). Granted, you need expertise to create immersive 3D worlds, but much of the core technology, like physics simulation and dynamic lighting, is already built in. Furthermore, with new techniques like Blueprints (visual scripting) for Unreal Engine allowing developers to write little if any code, it will become easier and easier to build for VR.

Unlike other tech trends, VR doesn’t suffer from high hardware costs or an immature developer ecosystem. There are also few if any legal or ethical hurdles holding it back. VR has a solid base within the games industry, as evidenced by the recent launch of YouTube Gaming, where gamers playing VR demos feature prominently on the homepage. Once the commercial hardware starts selling next year, more developers will write software for it. With more software available, more people will buy the hardware.

Tech trends don’t happen in isolation and I believe there’s a rational order of things. VR will help us simulate and test robots, drones, and anything else with high hardware costs cheaply and quickly. VR will help us harden software for self-driving cars so that lessons are learned through virtual accidents rather than real tragedies. VR changes how we develop technologies forever: software allows us to work with bits, VR allows us to work with atoms.

The Year I Got Into VR


In a few days, it will have been a year since my last post. During that time I ventured into the nascent world of Virtual Reality (VR), and I’m still there. I didn’t write anything about VR here, partly because it was an obscure and oftentimes ridiculed topic and partly because I felt bummed that I didn’t get that web development job at Oculus. But now that Palmer Luckey (or Lucky Palmer, as my wife calls him) has been on the covers of a few big magazines, VR is gaining momentum in popular culture, and I’ve realized that I need to find my own path in VR, I’m ready to share some of my journey so far–this is only the beginning.

How to Dive into VR Head-first

I can’t remember exactly how it started (perhaps it was this Tested video), but I started digging into this VR thing that people were buzzing about. I read Ready Player One and it blew my mind. I ordered a DK2, but there was no telling when it’d arrive–I just mashed the F5 key on the unofficial DK2 shipping website like everyone else. I found out you could buy the Leap Motion controller and have it shipped in two days, so I started tinkering–that’s how my previous post got seeded. I also pinged an Oculus recruiter on LinkedIn, had a few calls with a Mr. Cline (who I suspected was related to Ernest Cline), and got as far as a full-day onsite down in Irvine. It was an amazing experience, though looking back I was clearly a VR newb and it definitely showed. I even titled my PowerPoint presentation “Road to VR” and probably gave off a stalker vibe when I showed my code challenge, which involved creating my own Oculus Share.

RowVR and Beyond

Undeterred, I started working with a friend, tried every DK2 demo, learned Unity and later Unreal Engine 4, and created a few demos. We joined the /r/oculus community, started going to SVVR meetups, created how-to videos on YouTube, built two PCs and bought a Note 4 + GearVR, won second place in the first VRHackathon (Health category), got featured on the Oculus Share homepage for Picard’s Quarters, built a website to automatically track the latest versions of SDKs and engines, watched Super Bowl 49 in Altspace, got to try the Stanford VR Lab, and, in meatspace, changed jobs twice. I’ll be writing some VR posts here but mostly on the RowVR Blog. I’d like to share my experiences because VR is coming and I believe its impact will be profound–right up there with solar energy, electric cars, and colonizing Mars. A wise, goofy young man once said:

“If you can make a platform where you can do anything, how sad would it be if nothing is worth doing.”

Controlling Unity with Leap Motion



I’m still waiting for the Oculus DK2 to ship, so I’m doing an open source project to find a more intuitive way to translate, scale, and rotate objects in Unity. If you’ve seen the Iron Man movies or Elon Musk designing rockets with his hands, you’ll understand what I’m going for. Manipulating 3D objects with a keyboard and mouse is quite limiting. If we can improve on this, it will allow us to create content for VR faster.

We’ll be using the Leap Motion, which ships in 2 days from Amazon :-). The plan is to create a custom Unity editor window that will communicate with the Leap Motion sensor. We’ll then connect the hand movements directly to the translation, scale, and rotation controls for the currently selected GameObject. Feedback is always welcome. Here’s a link to the source on GitHub.

How To Choose a Web Service API

Most startups today leverage commercial SaaS services. When it comes to choosing an API, there are many choices, but I tend to try the ones that end in “.io” first. For example, when looking at weather forecast APIs, I skipped over familiar names like Yahoo and the Weather Channel and went straight for the newcomers. Better service APIs tend to have domains ending in “.io”, and they have straightforward APIs, solid documentation, libraries in your favorite language, and simple freemium plans. All of this usually allows you to decide within minutes whether a service will work for you. Check out my super short tutorial on using one.


How to Use Multiple Heroku Accounts Together

It’s fairly common to have one Heroku account for work projects and another for personal projects. However, I hardly ever see people use this nice plugin:

This plugin configures separate SSH identities for each account and makes it easy to switch which account a project’s .git/config uses. Hope this saves you time and a headache.


Rails Google OAuth2 Tutorial

Google recently deprecated OpenID 2.0 authentication, which I used to authenticate users via Google Apps for internal projects like our Dashboard. In a couple of months, it will just stop working so I’ve been converting projects to use OAuth 2.0. Google login is pretty convenient, especially if your team is on Google Apps. The conversion process was very annoying so I hope this tutorial saves you time.

First, we’ll need to set up a new project in the Google Developers Console.


Next, enable the “Google+ API”:


Go to “APIS & AUTH > Credentials” and click “Create New Client ID”. You’ll need to configure the origins and redirect URIs for every domain you need. I’ve configured it for development and for Heroku so you can see a live demo.


You should now have a CLIENT ID and CLIENT SECRET. Let’s put them in your shell startup script so that your app can access them. We do it this way so that you don’t check sensitive information into your source code. For example, in ~/.bash_profile (with your own values):

export GOOGLE_CLIENT_ID_TUTORIAL=<your client id>
export GOOGLE_CLIENT_SECRET_TUTORIAL=<your client secret>

Now we can run the example:

source ~/.bash_profile
cd ~/Sites
git clone
cd google_oauth2_tutorial/
bundle install
bundle exec rake db:setup
bundle exec rails s

This loads your shell startup script, grabs the source code, sets up the database, and starts the app. If all went well, you should be presented with the Google login screen. After logging in and approving the app permissions, you should see “You are logged in via OAuth 2.0 as <your email>!”.

More Details

This tutorial uses the Omniauth gem, which makes it easier to provide multiple ways for users to authenticate into your app. You specify what you want your app to allow as individual “strategies”:


Rails.application.config.middleware.use OmniAuth::Builder do
  provider :google_oauth2, ENV['GOOGLE_CLIENT_ID_TUTORIAL'], ENV['GOOGLE_CLIENT_SECRET_TUTORIAL'], {scope: 'email,profile'}
end

Tip: If you want to restrict logins to your Google Apps domain, simply pass an additional parameter:

provider :google_oauth2, ENV['GOOGLE_CLIENT_ID_TUTORIAL'], ENV['GOOGLE_CLIENT_SECRET_TUTORIAL'], {hd: '', scope: 'email,profile'}

The whole flow can be confusing so make sure you reference the Omniauth documentation before trying to troubleshoot. I found that if you don’t fully understand the flow, it will be very hard to debug your code. However, once you do, adding other strategies like Facebook or Twitter should be much easier.
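To make the callback end of the flow more concrete, here’s a small sketch of the kind of hash OmniAuth hands you via request.env['omniauth.auth'] after a successful Google login (the values shown are fabricated), and how you might pull out the fields you’d typically persist on a user record:

```ruby
# A trimmed-down example of the auth hash omniauth-google-oauth2 builds
# after a successful login (values are fabricated for illustration).
auth = {
  'provider' => 'google_oauth2',
  'uid'      => '123456789012345678901',
  'info'     => {
    'name'  => 'Jane Doe',
    'email' => 'jane@example.com'
  }
}

# Pull out just the fields you'd typically store on a User record.
def extract_identity(auth)
  {
    provider: auth['provider'],
    uid:      auth['uid'],
    email:    auth['info']['email'],
    name:     auth['info']['name']
  }
end

puts extract_identity(auth).inspect
```

In a Rails controller, the auth hash would come from request.env['omniauth.auth'] inside whatever callback action you route OmniAuth to; the controller and model names are up to you.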


If you’re seeing “invalid client_id”, your environment variables are probably not set correctly. You can use the “printenv” command to verify that the particular terminal tab you’re running the server in has the right variables. If not, source your shell startup script again. If you’re seeing API permission errors, you probably forgot to enable the Google+ API. Google’s documentation has more detailed information on specific errors that may help. If all else fails, clear your browser cookies for localhost.
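One more quick check from the app’s side: since OmniAuth reads the credentials from ENV, a tiny Ruby snippet (run in irb or rails console) will tell you whether the Ruby process actually sees them. The variable names match the ones used in this tutorial.

```ruby
# Report whether the OAuth credentials are visible to this Ruby process.
def check_oauth_env(keys = %w[GOOGLE_CLIENT_ID_TUTORIAL GOOGLE_CLIENT_SECRET_TUTORIAL])
  keys.map do |key|
    status = ENV[key].to_s.empty? ? 'MISSING' : 'set'
    "#{key}: #{status}"
  end
end

puts check_oauth_env
```

If either line prints MISSING, the server process was started before the variables were exported: source your startup script and restart the server in the same tab.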