The Currently-Available Mobile Web Options

The mobile space is still largely undefined and proprietary and is changing all the time.
Following up on this post, I want to discuss all the different mobile solutions available today.

Some Stats & Definitions

The mobile arena is so fragmented that I want to try to wade through the terminology and base the discussion on some defined parameters:

Offline Mobile Web Apps:

Websites that are accessible via the browser, and once bookmarked on the phone are accessible whether the phone is connected to the internet or not.

Tablet vs Phone vs Desktop vs Laptop:

I’m going to group these into two categories:

  1. Phones: I’m grouping tablets in with phones.
  2. Computers: I’m grouping laptops in with desktop computers.

(“Preposterous! Tablets and laptops should be in the same category! They’re essentially the same thing! That classification won’t include the tablet/laptop hybrids omg!!!”.) It’s not a precise division but for all intents and purposes this will suffice for this article.
This grouping is based on access and use:
Phones/Tablets are sometimes wi-fi connected, sometimes cell-service connected, and sometimes offline. In general they are used to consume content.
Desktops/Laptops are in general always connected, and used in one place. They are used to both produce and consume content.

Smartphone vs Feature Phone:

Roughly half of (U.S.) mobile users are on smartphones; the other half use feature phones. Smartphones, for the purposes of this post, are defined as phones that can browse online content; feature phones are everything else. We will only discuss smartphones (remember, I’m including tablets in here as well). FYI, I’m only talking about half of the population of one country; this is the population that many of the clients I work with want to reach, so that’s who I’m focused on here.

Accessing Content

Pretty much every developer has sat with a client and had to respond to this statement:


Whether they’re referring to a mobile app or a native app or a Facebook app or a web app or whatever else, for some reason everybody thinks they need an app. 1. Think of idea. 2. Build app. 3. PROFIT!!! (reference)

People want their customers/users to access their content on their smartphones via the internet.
Smartphone users can access content through the browser or directly through a native app.
Here are the good and bad points of each:

  • Website: Just a regular website – one that loads into a phone’s browser pretty much like it loads into a desktop or laptop browser.
    • GOOD: Inexpensive – no additional effort beyond building a website is required.
    • BAD: The site doesn’t work that well on mobile. Everything is shrunk to fit, and you have to zoom and pan around to read anything.
  • (Mobile-Optimized) Website: same as above, but the content is formatted differently to be easier to read/use on a smaller screen.
    • GOOD: The same content delivered to desktop users, but presented so it’s easier to read on a smaller device (bigger buttons, etc.).
    • BAD: Additional development time, additional IA/UX time (your site’s existing ten nav items, each with their own drop-downs, don’t just automagically work on a teeny screen), additional design/UI time. THEN you have to make sure it works on the Androids and the iPhones, etc. All this = more $$, and you still have technical limitations (i.e. you can’t have the smoothly-animating games people are used to on their phones, or access the camera or much of the other hardware that makes the phone so cool; note that you do have some access to things like geolocation, but understand you’re limited).
  • (Native/Mobile) Application: a file downloaded from the internet (usually via a store/marketplace, etc.).
    • GOOD: Same as the mobile-optimized site, plus you have access to all of the hardware APIs (read: 3D games, photo-editing capabilities, etc.), and it’s easier to monetize. Apps are indexed in ‘app stores’, which is potentially good in terms of findability, legitimacy, etc.
    • BAD: Same as the mobile-optimized site x10. You need a developer with a specialized skillset (i.e. Objective-C with app-store approval experience). The app must first be downloaded from an app store (on iOS), and once you’re done you’ve still only covered one type of device.
      TODO: explain Sencha/Titanium/other write-once-deploy-everywhere hybrid apps – good in some situations, but they kind of end up being masters of none – show the best examples and how they’re still not very good.
      TODO: explain HTML5, how Steve Jobs lied, and how it doesn’t just solve mobile.

(Caveat time: Laptops can be carried around and can be used to work offline. Keyboards can be purchased to connect to tablets, and you can record awesome albums via GarageBand on a tablet.
I would 1) argue that if you’re doing these things exclusively you’re an outlier, and 2) remind you that these definitions are necessary to discuss the mobile space, so I’m going with them here. The tablet/laptop hybrids would then technically share both categories…)


A web app(lication) vs a website

I want to explain what I think ‘web app’, a term I keep hearing, means:

Different from a native app downloaded from a curated ‘app store’, this term is used to describe an application that is accessed via the browser on your phone or desktop.

So what’s the difference between a web app and a website?

People often try to differentiate the two, even developers, but unfortunately the definitions just don’t work.

The best one that could maybe work is this:

Websites: Websites are sites that are primarily informational. A news site, for instance, would be classified as a website because you go there to get information.

Web applications: These allow the user to perform actions, more like a tool. Hotmail is a tool: you go there to send/receive email, so it’s an app. So is Amazon, where you go to buy things. If it’s a tool in some way, then it’s classified as an app.

There are many problems with this differentiation: What if an informational site adds a tool and allows you to perform actions? Say a funny-video contest, or online ordering?
What if it lets you register to vote?
Hotmail, meanwhile, lets you access information without sending or receiving mail when you go to look up old contacts or addresses.
Is Amazon an app when you buy something but a site when you’re only looking up product dimensions?

This is the best classification I could find and as you can see it doesn’t hold up.

Other groupings exist (like the usage of server-side languages vs. static files), but huge holes can be shot through any of these as well.

The term is very useful in meetings to sound fancy, but ultimately:

WEB APP == WEBSITE. *(fixed hyperbolic example, would not actually run.)

(NOTE: The terms website / web app are not terms exclusive to mobile; these terms similarly describe desktop websites / web apps.)

Every developer I’ve shown this to so far feels strongly that, while it’s an extremely gray area, there is indeed a difference between an app and a site. Maybe we are looking at it the wrong way: rather than grouping them, we should be placing them on a scale, from site to app, defining them based on their complexity. At some point, as a website becomes more complex, it becomes an app. Viewed this way, there’s no such thing as a simple web app (it’s just a website with basic functionality) and no such thing as a complex website (it’s a web application with complex functionality).

This is my larger point: if devs are arguing nuance, why are marketers attempting to distinguish between the two?
One can argue that this is just semantics, but it seems as though (non-technical) marketers’ attempts to define the technologies have overtaken the actual technology. This leads to mismatched expectations in terms of budget, timing, and effectiveness. If we could simplify and clarify the buzzwords and focus on using technology to solve a problem, we’d be able to communicate with our clients and (non-technical) team members much more effectively.


Quadcopters 101

I’ve been really into quadcopters lately (or quadrocopters or quads or multi-rotor helicopters or drones or whatever you call them) and wanted to put all my examples in one place, because I keep sharing the same info over and over with different people.


For me, interest in quadcopters has not come from RC flying itself; other than owning an RC glider wing I haven’t really had an interest in RC flying. I’m a paraglider pilot and that satiates any flying bug I have.

I realized I wanted to participate in this when I saw this footage of climbers ascending the Trango Towers in Pakistan:

The quad is flying at 20,000 feet! Amazing. The footage has clearly been stabilized in post and is still wobbly, but holy crap, it’s definitely acceptable footage that would be next to impossible to film otherwise, unless you could get a heli pilot in Pakistan to take you up to 20,000 feet. The risk factor drops dramatically with this approach to getting footage, which is super sweet.

Buying a Quadcopter

I immediately did some research and bought a mini quad to practice with. I got this one on a recommendation from the forums, and it’s been awesome. Super cheap, super durable – a great way to start.

Not sure why the Chinese manufacturers thought a ladybug on the shell was a good idea, but don’t let that stop you from getting one; they are awesome. Mine has taken a beating (it ultimately broke when I was replacing the rotors; replacement parts are coming), and learning to fly these accurately and consistently is trickier than I thought, but I’m glad to have started on this one as it’s inexpensive.

Programmable Drones

I went to the Drone Games, which happened to be going on at the Groupon offices downtown. The participants were given a Parrot AR Drone 2.0, which is open-source and programmable – the teams hacked together demos and showed them off. There were some cool people there and I learned a lot talking with the participants. It helped me understand where things are now and where they’re going in the near future, like learning about Matternet, who plan to save the world with legions of automated flying drones:

Another one of the demos took the on-board 720p camera footage, ran it through facial-recognition software, and then posted any faces it detected to Twitter. Definitely cool stuff, and it makes you realize how fast things are progressing today.

More Awesomeness

Some time later I saw this video:

which is an amazing example of piloting skill. Some of these shots would have been impossible until now. Again, you can see it’s been handled in post quite a bit, but the final product is awesome. This guy is apparently also using a gimbal that auto-stabilizes the camera (think of the accelerometer in your phone; the gimbal zeros out whatever input it gets), similar to this:


Both the Pakistan vid and the San Francisco one are flown via RC controller, but both pilots are using FPV (first-person-view) equipment, which basically provides a live video link to goggles that you wear, effectively giving the pilot a first-person perspective from the camera on the quad. So awesome, but that’s down the road for me.

FPV glasses

Next Quad

I ordered a larger quad, the DJI Naza F450, with a Spektrum DX6i controller, which is the same one I had with my RC glider. This one is adequate to carry a GoPro. It’s a much larger machine, though, and it’s definitely going to take some time to get used to. I got this quad because you can add on the Naza GPS module, which gives the quad GPS capability – it knows where it is and you can program in targets, courses, etc. It will still have enough power to lift the FPV radio equipment when I’m ready to add that.

Spektrum DX6i Controller

DJI Naza F450


Nerd Alert

This hobby is a little nerdier than most things I’ve been interested in because it’s in the RC world (no offense, RC world), but this may end up making RC cool!
Imagine getting a shot of the riots in Syria, or rockets firing from Palestine into Israel, or any other political situation where it would be next to impossible to get footage otherwise. The same goes for action sports and even cinema. I have like one million shots I want to try to get. It’s been so fun envisioning the capabilities of this thing, and I can’t wait to get some awesome footage.


Fish Feeder – Arduino

We were catching an early flight to go skiing last weekend – at midnight I remembered that Leo, H’s fish, was not accounted for while we were going to be gone. In the past we’ve had neighbors watch his forefather (that fish died) and then had friends do it, but that was awkward, and I 1) didn’t want to have to ask anyone and 2) didn’t have time even if I wanted to.

I’ve been consulting at an agency recently. A guy who works with me has been doing cool projects and getting me excited about stuff so I immediately thought of Arduino + servo.

Retina MBP

I hadn’t messed with the Arduino since I got my new computer, so I was concerned there would be issues getting up and running. Indeed it wasn’t just plug-and-play, as there were no USB options when I fired up the IDE. This was easily fixed, however: I googled and found this URL for the FTDI USB drivers, installed them, re-opened the IDE, and it worked.


I knew I’d need some sort of timer and was unfamiliar with this on Arduino. As it was so late, and timer functionality is so common, I assumed I could just find some code online. I was correct.

Right on the Arduino site I found a time library appropriately called Time, and was pleasantly surprised to find along with it a TimeAlarm library. Even better.

These libraries require you to set a time when the Arduino boots up, and that essentially becomes the system time.

Loaded Straw + Servo over Leo’s Bowl

Setup & Testing

I got the servo running and hooked to the timer; the servo was moving slowly, so I created a shake() function to help any straggling fish-food pieces eject.

I couldn’t find any AA batteries so i ran down to 7-11 and picked some up.

Tested the code on a 10-second timer, and then hoped for the best when setting it up for Leo.

Setting up the hardware was the most difficult part – I just stacked some of H’s wood blocks near Leo’s bowl, hot-glued the servo to the top block (I didn’t want the servo to fall in the bowl and fry Leo), and then rubber-banded a drinking straw (which I had cut down and singed at one end) to the servo arm.


Arduino / Breadboard Setup next to Bowl


Arduino + Breadboard + Fish Bowl


The fish lived and the straw was empty, so I guess it worked.

In the future I’ll be using this again, but I’ll have to figure out a way to have multiple feedings.

Here’s the Arduino sketch

Facebook API changes

LOL, so interesting to watch how quickly social technology is changing. It’s great.

Today Facebook announced changes to their API, changing or axing two pretty big features:

No more (custom) passive actions

The first is that they will “no longer approve custom actions that publish stories as people consume content” – meaning functionality like the Nike app saying ‘Bill started a run’ or Instagram saying ‘Bill uploaded a photo’ is no longer valid as a passive action (i.e. the user isn’t expressly sharing the action; the software is sharing the user’s actions as they interact with the app).

Passive sharing will still be published, but it must use the built-in actions:

  • Like – for any object
  • Follow – for profiles
  • Listen – for audio
  • Read – for articles
  • Watch – for video

Apps can no longer post to a user’s friends’ walls

The next change is the best: “Posts to friends’ walls via the API generate high levels of negative user feedback…so we are removing it from the API.”

Awesome. Read: no more FarmVille postings. At least not the auto-posts; an app can still open a ‘post to your friends’ walls’ dialogue, but the user must actually perform the action. This functionality probably made sense on wireframes and from a logistical standpoint, but no one in a real-world social situation would ever want something like this.

From now on, if a friend posts on your wall you can be assured that it wasn’t by accident. This makes for more control and less spam.

Good changes

These changes are both steps in the right direction: unifying the actions makes for a consistent experience, and limiting what apps can do with your permission helps the user to maintain control over their voice.

People who know me know that I’m critically bullish on Facebook – the current experience is terrible, but fundamentally it’s exactly the tool needed to replicate our IRL lives online. In execution, Facebook has a long way to go: it’s convoluted and confusing, and a user isn’t going to speak or share if they can’t be certain who they are talking to.

Giving users a simpler experience and more importantly giving them control over their message are both steps in the right direction.

macro photos

I posted about the microscopic photography by Rich Gibson that I saw at Maker Faire a couple years ago here. I saw him there again this year and spoke with him about his process. He’s developed a complex and detailed process that requires his own hardware, with hundreds and hundreds of images that are focus-stacked and composited on a computer.

I guess this subject matter can accommodate the teeny depth of field this process allows. Maker Faire 2011

His work is so unique, and I wish there were some way to get close to it – I think it’d be cool to have a huge, detailed print on your wall:

In doing a little research I found a hacky way to at least get in close to a subject without special equipment:

The trick is to shoot through a lens that’s reversed and held in front of your standard lens; the reversed lens acts like a powerful close-up filter, letting you focus from just millimeters away.

This got some interesting results. Here’s a carrot (click to enlarge):

Detail from this guy:

As you can see, you can get really close to the subject, which is cool, but DOF and vignetting are a problem; I cropped out the 60% of this image that was unusable.

Here’s another:

This is the A and N from STANLEY on the handle:

You have to mess with the settings a ton to get it right – I put a 50mm f/1.4 on the camera and then had a zoom lens reversed and held over the top of it; focusing or zooming even the slightest bit can put the whole thing out of focus, as you’re dealing in millimeters.
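For what it’s worth, the usual rule of thumb for reversed-lens stacking (common macro-forum lore, so treat it as approximate) is that magnification is the mounted lens’s focal length divided by the reversed lens’s focal length – which is why a shorter reversed focal length gets you in closer. A quick sanity check in Python (the function name is mine, just for illustration):

```python
def reverse_stack_magnification(mounted_mm, reversed_mm):
    """Rule-of-thumb magnification for a reversed lens stacked in front
    of a mounted lens: mounted focal length / reversed focal length."""
    return mounted_mm / reversed_mm

# 50mm prime on the body, zoom lens reversed and set to 25mm in front of it
m = reverse_stack_magnification(50, 25)  # → 2.0 (about twice life size)
```

So with my 50mm mounted, zooming the reversed lens changes the magnification on the fly, which is part of why the focus is so touchy.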

This is a simple way to get in close on an image.
The people who created the spider image above use a proper macro lens, then do the stacking process to get everything in focus.
I imagine Rich’s process is the same in theory, but on another level.


I’ve loved Jason Salavon’s work ever since I saw this:

It’s a mathematical average of every Playboy centerfold over ten years.

Here’s another one of 114 homes for sale in the Dallas/Ft. Worth area:

I decided to try to mimic this process and average out all of my Instagram images.
The images above work because they have similar compositions; Instagram images all have identical aspect ratios, but there is no pattern to the compositions other than maybe darkened edges…
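The averaging itself is straightforward: treat each image as a grid of pixel values and take the per-pixel mean across the whole set. A minimal sketch in Python (the real version works on actual photos; here I fake a few tiny same-size grayscale ‘images’ as nested lists, and average_images is my own name, not code from the actual app):

```python
def average_images(images):
    """Average a list of same-size grayscale images (2D lists of 0-255 values)."""
    height = len(images[0])
    width = len(images[0][0])
    sums = [[0.0] * width for _ in range(height)]
    for img in images:
        for y in range(height):
            for x in range(width):
                sums[y][x] += img[y][x]
    n = len(images)
    return [[sums[y][x] / n for x in range(width)] for y in range(height)]

# three tiny 2x2 "images"
imgs = [
    [[0, 255], [255, 0]],
    [[255, 0], [0, 255]],
    [[128, 128], [128, 128]],
]
avg = average_images(imgs)
```

With real photos you’d do this per channel (R, G, B), but the math is identical – and it’s easy to see why lots of unrelated compositions average out toward a flat gray.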

For now I’m limited to 60 images: the Instagram API paginates results at 60, and due to a bug in my code (I can’t parse the JSON when the second call returns, for some reason) these are just the most recent 60 images.
Here’s my first attempt at a proof of concept – my 60 most recent Instagram (@j_red) images averaged out:

You can see the Snapseed borders in there, and the diagonal lines that look like a huge fingerprint are from a contrasty image of Tyler Durden on the TV that I shot in the dark.
As expected, the colors average out to a contour-less image, but it’s actually pretty to look at.

Here is my wife’s account (@liz_stan):

Next I tried images via Google’s image search.

Here is the average composite of the first 200 image results for ‘eiffel tower’:

The average composite of the first 200 images for the query ‘golden gate bridge’:

And this is the first 200 for ‘tree’:

The images are all aligned top-left (which is why they run out of pixel data on the bottom-right) but even with vertical and horizontal images you can get a general idea of what the item is.
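That top-left alignment can be handled by sizing a canvas to the largest image and, at each pixel, averaging only the images that actually have data there – pixels toward the bottom-right get fewer samples, which is exactly the fade you see. A sketch (again Python, names my own, not the app’s actual code):

```python
def average_aligned_top_left(images):
    """Average differently-sized grayscale images aligned at the top-left.
    Each image is a 2D list of pixel values; each output pixel is the mean
    over only the images that cover that position."""
    height = max(len(img) for img in images)
    width = max(len(row) for img in images for row in img)
    sums = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for img in images:
        for y, row in enumerate(img):
            for x, v in enumerate(row):
                sums[y][x] += v
                counts[y][x] += 1
    return [[sums[y][x] / counts[y][x] if counts[y][x] else 0.0
             for x in range(width)] for y in range(height)]

a = [[100, 100], [100, 100]]   # 2x2 image
b = [[200, 200, 200]]          # 1x3 image -- wider but shorter
avg = average_aligned_top_left([a, b])
# top-left pixel is covered by both images: (100 + 200) / 2 = 150
```

An alternative is to zero-pad the smaller images before a plain mean, which darkens the uncovered regions instead of leaving them at a single image’s values.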

I’m working on an application which I’m calling ‘instagraverage’ (which is a terrible name) that lets you do your own account; I was pleased with the Google results as well, so I may add that.



tree ring – straight

Here’s the previous code rendered in a straight line; it looks like dryer lint or something.



circles in processing.js

Here’s the code:

int numberOfPoints = 360;
float angleIncrement = 360.0 / numberOfPoints;
Item[] items;

void setup() {
  size(700, 700);
  noStroke();
  items = new Item[numberOfPoints];
  for (int i = 0; i < numberOfPoints; i++) {
    items[i] = new Item(i, 0, 0);
  }
}

void draw() {
  // no background() call: the circles accumulate frame over frame
  for (int i = 0; i < numberOfPoints; i++) {
    items[i].setTargetPoint();
    items[i].update();
  }
}

class Item {
  int id;
  float x, y;
  float r, g, b;
  float radius;     // target radius
  float newRadius;  // current (eased) radius

  Item(int iid, float ix, float iy) {
    id = iid;
    x = ix;
    y = iy;
    radius = random(300) + 50;
    newColor();
  }

  void update() {
    // convert the polar position (newRadius, angle) to screen coordinates
    x = newRadius * cos((angleIncrement * id) * (PI / 180));
    y = newRadius * sin((angleIncrement * id) * (PI / 180));
    x += width / 2;
    y += height / 2;
    fill(r, g, b, 240);
    ellipse(x, y, newRadius / 5, newRadius / 5);
  }

  void setTargetPoint() {
    // ease toward the target radius; pick a new target (and color) on arrival
    newRadius += (radius - newRadius) / 10;
    if (abs(newRadius - radius) < 1) {
      radius = random(300) + 50;
      newColor();
    }
  }

  void newColor() {
    r = random(255);
    g = random(100);
    b = random(10);
  }
}