Setting up a Data Warehouse with AWS Redshift and Ruby


Most startups eventually need a robust solution for storing large amounts of data for analytics. Perhaps you’re running a video app trying to understand user drop-off or you’re studying user behavior on your website like we do at Credible.

You might start with a few tables in your primary database. Soon you may create a separate web app with a nightly cron job to sync data. Before you know it, you have more data than you can handle, jobs are taking way too long, and you’re being asked to integrate data from more sources. This is where a data warehouse comes in handy. It allows your team to store and query terabytes or even petabytes of data from many sources without writing a bunch of custom code.

In the past, only big companies like Amazon had data warehouses because they were expensive, hard to set up, and time-consuming to maintain. With AWS Redshift and Ruby, we’ll show you how to set up your own simple, inexpensive, and scalable data warehouse. We’ll provide sample code that shows you how to extract, transform, and load (ETL) data into Redshift, as well as how to access the data from a Rails app.
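To give you a taste of the load step, here’s a minimal sketch of building the Redshift COPY statement that bulk-loads a gzipped CSV from S3. The table name, S3 path, and credentials below are placeholders for illustration; in practice you’d execute the resulting SQL against your cluster with a Postgres client such as the pg gem.

```ruby
# Build a Redshift COPY statement that bulk-loads a gzipped CSV from S3.
# All names and credential values below are placeholders.
def build_copy_sql(table, s3_path, access_key, secret_key)
  <<~SQL
    COPY #{table}
    FROM '#{s3_path}'
    CREDENTIALS 'aws_access_key_id=#{access_key};aws_secret_access_key=#{secret_key}'
    CSV GZIP
    TIMEFORMAT 'auto';
  SQL
end

sql = build_copy_sql('events', 's3://my-bucket/events/2015-09-01.csv.gz',
                     'PLACEHOLDER_KEY', 'PLACEHOLDER_SECRET')
puts sql
```

COPY is the fast path into Redshift: it loads from S3 in parallel across the cluster’s nodes, which is why the ETL jobs stage files to S3 first instead of INSERTing rows one at a time.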

I created a tutorial for Redshift and Ruby here and wrote about it on our new engineering blog.

Why VR is Ready but Robotics and Other Tech Aren’t


When I talk about Virtual Reality (VR), my heart rate increases, my volume climbs, and before you know it, someone’s asking me to chill-the-frak-out. Oftentimes people who haven’t seen the light scoff at the notion that VR is ready for take-off. In fact, they usually start touting something else. Timing is paramount in tech, with the biggest example being Webvan, a spectacular failure and the “biggest dot-com flop in history“. However, online grocery delivery is now a booming business (e.g. Instacart, Safeway and Costco deliveries); the idea simply wasn’t ready back then. Here are some tech trends and why they’re not ready to take off.


Robotics

I’d consider any autonomous robot like the Roomba, Jibo, or those creepy animal-bots part of this trend. Google’s self-driving car counts, and quad/hexa/octo-copters kinda count as well. Unlike web apps, which just require software, robots need both hardware and software. You’ll need a robot base like the Roomba dev kit, a computer like the Raspberry Pi, and then ideally some sensors and actuators (e.g. a camera and an arm so it can fetch the beer from the fridge for me). You can leverage commercial platforms like Sphero, but these have few sensors and thus aren’t very interesting (I’ve done some Ruby programming with it). Until hardware costs go down, you’ll be hard-pressed to find people writing software for robots. For example, a big reason Google Glass didn’t take off was that it cost a whopping $1500 and wasn’t even 3D (the other reason was that people who had them tended to be Glassholes). Even when the hardware becomes dirt-cheap, writing software for interacting with people and the real world is fraking tough. We have a hard enough time getting websites and mobile apps to not suck today.

Internet of Things (IoT), EdTech, and BioTech

With platforms like SmartThings and Amazon Echo, you can do awesome things like turn off your lights by saying “Alexa, turn off the living room lights”. The Echo costs $180 and smart light bulbs cost at least $30, 10x the cost of a normal light bulb (how many light bulbs do you have in your house?). IoT also usually requires good WiFi, and we all know how good that is. IoT devices are much more useful if they’re portable, but then we have the issue of battery life, which we’re all familiar with because nobody’s phone lasts more than a day. As for the software, Amazon Echo, Microsoft Cortana, Apple Siri, and Google Now are all in their infancy. Heck, most of the Google Now team just quit. Education and healthcare technology are mired in regulation and bureaucracy, a common cause of tech trends taking longer than expected. Quads and self-driving cars are starting to see the same hurdles appear too. Take a look at the FAA drone regulations or the self-driving car laws that were introduced across the US in 2015.

VR and the Order of Things

The Oculus Rift DK2 costs $350. The first consumer version of the Rift will be ~$300-$500, or ~$1500 if you include a gaming PC. Keep in mind that most of the gaming community will already have the necessary PC hardware. Samsung’s GearVR is $200 if you own one of their high-end phones, and all the way down the line you can get a Google Cardboard for less than $10. The Oculus Rift will soon be sold as a consumer product at scale in the price range of other, more mature products like the Xbox One ($350), Playstation 4 ($400), iPhone ($650 off-contract), and iPad Air 2 ($500). Given that Oculus has already sold 175,000+ dev kits, we’ll be seeing VR products under Christmas trees for years to come.

For software, VR leverages the game industry. Popular game engines, the foundation of VR software, are either freemium (Unity3D, $1500 pro version) or have no upfront cost (Unreal Engine, 5% royalty). Microsoft Visual Studio, the standard IDE, is also freemium. With these tools, you can already train to be a Jedi (I’ve tried this and it’s even more awesome than it looks). Granted, you need expertise to create immersive 3D worlds, but much of the core technology, like physics simulation and dynamic lighting, is already built in. Furthermore, with new techniques like Blueprints (visual scripting) for Unreal Engine letting developers write little if any code, it will become easier and easier to build for VR.

Unlike other tech trends, VR doesn’t suffer from high hardware costs or an immature developer ecosystem. There are also few if any legal or ethical hurdles holding it back. VR has a solid base within the games industry, as evidenced by the recent launch of YouTube Gaming, where gamers playing VR demos feature prominently on the homepage. Once the commercial hardware starts selling next year, more developers will write software for it. With more software available, more people will buy the hardware.

Tech trends don’t happen in isolation and I believe there’s a rational order of things. VR will help us simulate and test robots, drones, and anything else with high hardware costs cheaply and quickly. VR will help us harden software for self-driving cars so that lessons are learned through virtual accidents rather than real tragedies. VR changes how we develop technologies forever: software allows us to work with bits, VR allows us to work with atoms.

The Year I Got Into VR


In a few days, it will have been a year since my last post. During that time I ventured into the nascent world of Virtual Reality (VR), and I’m still there. I didn’t write anything about VR here, partly because it was an obscure and oftentimes ridiculed topic, and partly because I felt bummed that I didn’t get that web development job at Oculus. But now that Palmer Luckey (or Lucky Palmer, as my wife calls him) has been on the covers of a few big magazines, VR is gaining momentum in popular culture, and I’ve realized that I need to find my own path in VR, I’m ready to share some of my journey so far. This is only the beginning.

How to Dive into VR Head-first

I can’t remember exactly how it started (perhaps it was this Tested video) but I started digging into this VR thing that people were buzzing about. I read Ready Player One and it blew my mind. I ordered a DK2 but there was no telling when it’d arrive; I just mashed the F5 key on the unofficial DK2 shipping website like everyone else. I found out you could buy the Leap Motion controller and have it shipped in two days, so I started tinkering; that’s how my previous post got seeded. I also pinged an Oculus recruiter on LinkedIn, had a few calls with a Mr. Cline (who I suspected was related to Ernest Cline), and got as far as a full-day onsite down in Irvine. Amazing experience, that was, though looking back I was clearly a VR newb and it definitely showed. I even titled my PowerPoint presentation “Road to VR” and probably gave off a stalker vibe when I showed my code challenge, which involved creating my own Oculus Share.

RowVR and Beyond

Undeterred, I started working with a friend, tried every DK2 demo, learned Unity and later Unreal Engine 4, and created a few demos. We joined the /r/oculus community, started going to SVVR meetups, created how-to videos on YouTube, built two PCs and bought a Note4+GearVR, won second place in the first VRHackathon (Health Category), got featured on the Oculus Share homepage for Picard’s Quarters, built a website to automatically track the latest versions of SDKs and engines, watched Super Bowl 49 in Altspace, got to try the Stanford VR Lab, and in meatspace, changed jobs twice. I’ll be writing some VR posts here but mostly on the RowVR Blog. I’d like to share my experiences because VR is coming and I believe its impact will be profound, right up there with solar energy, electric cars, and colonizing Mars. A wise, goofy young man once said:

“If you can make a platform where you can do anything, how sad would it be if nothing is worth doing.”

Controlling Unity with Leap Motion



I’m still waiting for the Oculus DK2 to ship, so I’m doing an open source project to find a more intuitive way to translate, scale, and rotate objects in Unity. If you’ve seen the Iron Man movies or Elon Musk designing rockets with his hands, you’ll understand what I’m going for. Manipulating 3D objects with a keyboard and mouse is quite limiting. If we can improve on this, it would allow us to create content for VR faster.

We’ll be using the Leap Motion, which ships in 2 days from Amazon :-). The plan is to create a custom Unity editor window that will communicate with the Leap Motion sensor. We’ll then connect the hand movements directly to the translation, scale, and rotation controls for the currently selected GameObject. Feedback is always welcome. Here’s a link to the source on GitHub:

How To Choose a Web Service API

Most startups today leverage commercial SaaS services. When it comes to choosing an API, there are many choices, but I tend to try the ones that end in “.io” first. For example, when looking at weather forecast APIs, I skipped over familiar names like Yahoo and Weather Channel and went straight for one ending in “.io”. Better service APIs tend to have domains ending in “.io”, straightforward endpoints, solid documentation, libraries in your favorite language, and simple freemium plans. All of this usually lets you decide within minutes whether the service will work for you. Check out my super short tutorial on using one.
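To illustrate how quickly you can evaluate one of these services, here’s a sketch of calling a hypothetical JSON weather API with nothing but the Ruby standard library. The host and response shape below are invented for this example; a real service’s docs would give you the exact URL format and fields.

```ruby
require 'json'
require 'net/http'

# Hypothetical forecast API -- the host and response fields are made up
# here purely to show how little code a well-designed JSON API demands.
def forecast_url(api_key, lat, lng)
  URI("https://api.example.io/forecast/#{api_key}/#{lat},#{lng}")
end

def parse_forecast(json_body)
  data = JSON.parse(json_body)
  "#{data['summary']} (#{data['temperature']}F)"
end

# In real use you would fetch over the network:
#   body = Net::HTTP.get(forecast_url(ENV['WEATHER_API_KEY'], 37.77, -122.42))
sample_body = '{"summary": "Partly Cloudy", "temperature": 68}'
puts parse_forecast(sample_body)
```

If a service makes even this much harder than the sketch above (auth ceremony, XML envelopes, no free tier), that’s usually a sign to keep shopping.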


Rails Google OAuth2 Tutorial

Google recently deprecated OpenID 2.0 authentication, which I used to authenticate users via Google Apps for internal projects like our Dashboard. In a couple of months, it will just stop working so I’ve been converting projects to use OAuth 2.0. Google login is pretty convenient, especially if your team is on Google Apps. The conversion process was very annoying so I hope this tutorial saves you time.

First, we’ll need to set up a new project in the Google Developers Console.


Next, enable the “Google+ API”:


Go to “APIS & AUTH > Credentials” and click “Create New Client ID”. You’ll need to configure the origins and redirect URIs for every domain you need. I’ve configured it for development and for Heroku so you can see a live demo.


You should now have a CLIENT ID and CLIENT SECRET. Let’s put them in your shell startup script so that your app can access them. We do it this way so that you don’t check sensitive information into your source code.
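The lines you add to your shell startup script (e.g. ~/.bash_profile) look like this; the values here are placeholders, so paste in the real ones from your Developers Console:

```shell
# ~/.bash_profile -- placeholder values, replace with your real credentials
export GOOGLE_CLIENT_ID_TUTORIAL="your-client-id.apps.googleusercontent.com"
export GOOGLE_CLIENT_SECRET_TUTORIAL="your-client-secret"
```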



Now we can run the example:

source ~/.bash_profile
cd ~/Sites
git clone
cd google_oauth2_tutorial/
bundle install
bundle exec rake db:setup
bundle exec rails s

This loads your shell startup script, grabs the source code, sets up the database, and starts the app. If all went well, you should be presented with the Google login screen. After logging in and approving the app permissions, you should see “You are logged in via OAuth 2.0 as <your email>!”.

More Details

This tutorial uses the Omniauth gem, which makes it easier to provide multiple ways for users to authenticate into your app. You specify what you want your app to allow as individual “strategies”:


# config/initializers/omniauth.rb
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :google_oauth2, ENV['GOOGLE_CLIENT_ID_TUTORIAL'], ENV['GOOGLE_CLIENT_SECRET_TUTORIAL'], {scope: 'email,profile'}
end

Tip: If you want to use this for your Google Apps domain, simply pass an additional parameter:

provider :google_oauth2, ENV['GOOGLE_CLIENT_ID_TUTORIAL'], ENV['GOOGLE_CLIENT_SECRET_TUTORIAL'], {hd: '', scope: 'email,profile'}

The whole flow can be confusing so make sure you reference the Omniauth documentation before trying to troubleshoot. I found that if you don’t fully understand the flow, it will be very hard to debug your code. However, once you do, adding other strategies like Facebook or Twitter should be much easier.
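To make the flow concrete: after Google redirects back to your callback, Omniauth puts an auth hash in request.env['omniauth.auth']. Here’s a minimal sketch of pulling out the fields you typically need; the sample hash below mimics the shape the google_oauth2 strategy returns (provider, uid, and an info sub-hash), and the helper name is my own.

```ruby
# Extract the fields we care about from an Omniauth-style auth hash.
# In a Rails callback action, `auth` would be request.env['omniauth.auth'].
def extract_user_info(auth)
  {
    provider: auth['provider'],
    uid:      auth['uid'],
    email:    auth['info']['email'],
    name:     auth['info']['name']
  }
end

# A hash shaped like what the google_oauth2 strategy returns:
sample_auth = {
  'provider' => 'google_oauth2',
  'uid'      => '1234567890',
  'info'     => { 'email' => 'jane@example.com', 'name' => 'Jane Doe' }
}

puts extract_user_info(sample_auth).inspect
```

In the callback action you’d typically look up or create a User record from the uid and email, then store the user’s id in the session.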


If you’re seeing “invalid client_id”, your environment variables are probably not set correctly. You can use the “printenv” command to verify if the particular terminal tab you’re running the server in has the right variables. If not, source your shell startup script again. If you’re seeing API permission errors, you probably forgot to enable the Google+ API. Google’s documentation has more detailed information on specific errors that may help. If all else fails, clear your browser cookies for localhost.


Startup Landscapes


Diagrams like this one for the crowdsourcing space can be very helpful for entrepreneurs. However, it’s frustrating when you can’t click on the company logos and you’re always wondering how out-of-date the diagram is. That’s why I created a project called Startup Landscapes. You can click on each logo, and the grouping is a little more organized so you can see it in a table view. I’ve played around with other visualizations like circle packing, but they’re more confusing than helpful at this point. I’ve only added a couple of the most popular diagrams out there. LUMA makes them for quite a few industries, so check those out. Don’t miss the Robotics one.

Why I Chose iPad Mini Over iPad Air


Most reviews start by telling you about specs. It’s more useful to understand personal behavior.

Watching Videos

When you’re lying in bed on your side, the weight of the iPad isn’t as big an issue because the edge is most likely resting on the bed, so you’re just keeping it from falling over. Like many people, I watch lots of videos: NBA highlights, game walkthroughs, movies, etc. I also have NBA League Pass, so I watch full games on demand with scrubbing capability (it’s awesome). I found that the Full is too big for this position, so I’m either extending my arms out uncomfortably or watching in windowed mode. You keep it at arm’s length because if you hold it too close, everything looks gigantic and your eyes get tired darting around the large field of view. With a Mini, I can go full screen and watch J-Lin slice through the lane at just the right distance.


Reading

The Mini is a paperback and the Full is a hardcover. Honestly, I prefer reading on my iPhone 5S over the Full. I’m not sure if it’s the longer travel distance for my eyes from side to side, or the Full not being as easy to handle while shifting positions, or just feeling self-conscious; people who read with a Full look silly, almost as silly as people who use iPads as video cameras. Whereas before I’d take one look at my Full and then turn on my phone to read, now I read a lot more, and on the Mini. For reading, I would actually prefer something lighter and narrower (uh oh, starting to describe a Kindle) but it’s good enough.

Taking Notes

I take a lot of notes because my brain needs Evernote. I’ve been searching a long time for something to get me past my laziness. I had a brick of a convertible Toshiba tablet with OneNote back in the day, I’ve used Penultimate with a Full plus stylus, and I’ve even tried real-world Moleskine notebooks because they feel so good. In all these cases, I couldn’t reliably get typed text at the end of the day. Handwriting recognition is never perfect, and transcribing real-world notebooks is a pain in the arse. The Mini solves this problem. In meetings, bringing your laptop is bad for attention because you usually lose eye contact, not to mention the disrespect. Thumbing away on your phone usually makes people think you’re playing Candy Crush. With the Mini, I feel like Data from TNG and all my ideas get recorded.

I should have bought this last year, even without the Retina display. It feels better for watching, reading, taking notes, and sketching, mostly because this is the right size for me. Think about how you would use it… and then go buy a Mini either way. I guarantee you’ll understand your personal behavior before those 30 days are up.

Read the Bad Reviews and Ignore the Good Ones



The holiday shopping season has started, so I wanted to share some advice with fellow buyers. This applies whether you’re buying a $5 toy or a $50k car.

Good Reviews Are Bad for You

Beware of reviews like “great product” and “would totally buy again”. Most startups you work for will ask their employees to give good ratings to their own products, whether it’s electronics or iPad apps. That’s just common sense: why wouldn’t you ask employees, friends, family, and the homeless guy on the street corner to give you a good rating? Every bit helps, right? People tend to quietly accept this type of ethical transgression but not the other kind, writing a bad review for a competitor (you can get sued for libel, right?). Good reviews tend to be shorter as well because the marketing material already covered the bases, so it’s hard to come up with something nuanced to rave about. For example, when looking for a “nose sucker” to remove boogers from my baby’s nose (true story), I found this useless 5-star review. The subject is “it works” and the user “HappyDays” admits that “we never tried other more traditional aspirators so I can’t compare it to those”. It’s reviews like this that helped create helpfulness ratings.

Bad Reviews Are Good for You

On the other hand, I love bad reviews. When I’m on Amazon, I look at reviews starting with the most scathing first. The longer and nastier the review, the better. That’s because bad reviews tend to get to the heart of the problem. The customer likely ran into a shortcoming of the product and, like someone stepping on dog poo, wanted to warn the rest of us about the stinking pile. In many cases, I can quickly decide whether the negative review is warranted and whether it affects my decision. For the same product, I came across this 1-star review. Her complaint was that using this made her ill because you are basically sucking the germs into your mouth. I thought about this for a second and decided that I’d be fine with this outside risk, since my germs are probably more dangerous to the baby than hers are to me. Other times, the reasons are legitimate but don’t apply to you. For example, another customer may not like the heating system of a car, but you live in Las Vegas so you only care about the air conditioning.

What to Look For

Start by asking whether the problem is a legitimate concern for you. If the “defect” doesn’t apply to you, move on. If it does, be extra sensitive to it. For example, if a customer suggests that a baby crib’s construction is shoddy, take it as a red flag and look for similar reports in other reviews. You don’t want to take any chances. Next, try to determine whether the review was influenced by emotion. If someone is screaming in all caps, “OMG, THIS IS THE WORST PRODUCT IN THE WORLD”, it’s probably less credible than someone writing “this product doesn’t feel safe because pieces came off after daily use of 1-2 hours”. At the end of the day, bad reviews are harder to come by because you’re counting on people to do the community a solid, and it’s easiest to be lazy. There are also cases like Yelp’s, where companies allegedly pay to remove or hide bad reviews. Just remember that all reviews are biased and that bad reviews are more likely to be helpful.