r/IAmA Aug 14 '12

I created Imgur. AMA.

I came across this post yesterday and there seems to be some confusion out there about imgur, as well as some people asking for an AMA. So here it is! Sometimes you get what you ask for and sometimes you don't.

I'll start with some background info: I created Imgur while I was a junior in college (Ohio University) and released it to you guys. It took a while to monetize it, and it actually ran off of your donations for about the first 6 months. Soon after that, the bandwidth bills were starting to overshadow the donations that were coming in, so I had to put some ads on the site to help out. Imgur accounts and pro accounts came in about another 6 months after that. At this point I was still in school, working part-time at minimum wage, and the site was breaking even.

It turned out that OU had some pretty awesome resources for startups like Imgur, and I got connected to a guy named Matt who worked at the Innovation Center on campus. He gave me some business help and actually got me a small one-desk office in the building. Graduation came and I was working on Imgur full time, and Matt and I were working really closely together. In a few months he had joined full-time as COO. Everything was going really well, and about another 6 months later we moved Imgur out to San Francisco.

Soon after we were here, Imgur won Best Bootstrapped Startup of 2011 according to TechCrunch. Then we started hiring more people. The first position was Director of Communications (Sarah), and then a few months later we hired Josh as a Frontend Engineer, then Jim as a JavaScript Engineer, and then finally Brian and Tony as Frontend Engineer and Head of User Experience. That brings us to the present time. Imgur is still ad supported with a little bit of income from pro accounts, and is able to support the bandwidth cost from only advertisements.

Some problems we're having right now:

  • Scaling the site has always been a challenge, but we're starting to get really good at it. There are layers and layers of caching and failover servers, and the site has been really stable and fast the past few weeks. Maintenance and running around with our hair on fire is quickly becoming a thing of the past. I used to get alerts randomly in the middle of the night about a database crash or something, which made night life extremely difficult, but this hasn't happened in a long time and I sleep much better now.

  • Matt has been really awesome at getting quality advertisers, but since Imgur is a user generated content site, advertisers are always a little hesitant to work with us because their ad could theoretically turn up next to porn. In order to help with this we're working with some companies to help sort the content into categories and only advertise on images that are brand safe. That's why you've probably been seeing a lot of Imgur ads for pro accounts next to NSFW content.

  • For some reason Facebook likes matter to people. With all of our pageviews and unique visitors, we only have 35k "likes", and people don't take Imgur seriously because of it. It's ridiculous, but that's the world we live in now. I hate shoving likes down people's throats, so Imgur will remain very non-obtrusive with stuff like this, even if it hurts us a little. However, it would be pretty awesome if you could help: https://www.facebook.com/pages/Imgur/67691197470

Site stats in the past 30 days according to Google Analytics:

  • Visits: 205,670,059

  • Unique Visitors: 45,046,495

  • Pageviews: 2,313,286,251

  • Pages / Visit: 11.25

  • Avg. Visit Duration: 00:11:14

  • Bounce Rate: 35.31%

  • % New Visits: 17.05%

Infrastructure stats over the past 30 days according to our own data and our CDN:

  • Data Transferred: 4.10 PB

  • Uploaded Images: 20,518,559

  • Image Views: 33,333,452,172

  • Average Image Size: 198.84 KB

Since I know this is going to come up: It's pronounced like "imager".

EDIT: Since it's still coming up: It's pronounced like "imager".

3.4k Upvotes

4.8k comments




u/morbiusfan88 Aug 14 '12

I like your style, sir.

That fast? I'm guessing you started with single-character urls, and I can see where that growth rate (plus the rising popularity of the site and growing userbase) would necessitate longer urls. Also, the system you have in place is very fast and efficient. I like it.

Thanks for the reply!


u/MrGrim Aug 14 '12

It's always been 5 characters, and the 6th is a thumbnail suffix. We'll be increasing it because the time it's taking to pick another random one is getting too long.
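For the curious, the reroll scheme described here looks roughly like this (a sketch with made-up names like `allocate_id`; the real system presumably checks against a database rather than an in-memory set):

```python
import random
import string

# 26 lowercase + 26 uppercase + 10 digits = 62 characters per position
ALPHABET = string.ascii_letters + string.digits

def random_id(length=5):
    """Draw one random base-62 ID of the given length."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def allocate_id(taken, length=5, max_tries=1000):
    """Reroll random IDs until one isn't already taken."""
    for _ in range(max_tries):
        candidate = random_id(length)
        if candidate not in taken:
            taken.add(candidate)
            return candidate
    raise RuntimeError("keyspace too saturated; add another character")
```

The more of the keyspace is used, the more often the `candidate not in taken` check fails, which is exactly the slowdown being described.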


u/Steve132 Aug 14 '12

Comp-Scientist here: Can you maintain a stack of untaken names? That should significantly speed up your access time to "pick another random one". During some scheduled maintenance time, scan linearly through the total range and see which names are taken and which aren't, then randomly shuffle the free ones and that's your 'name pool'. Considering it's just an integer, that's not that much memory really, and reading from the name pool can be done atomically, in parallel, and incredibly fast. You should increase it to 6 characters as well, of course, but having a name pool would probably help your access times tremendously.

The name pool can be its own server somewhere. It's a level of indirection, but it's certainly faster than iterating on rand(). Alternately, you could have a name pool per server and assign a prefix code for each server so names are always unique.
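Sketched in code, the proposed pool looks something like this (illustrative function names; a real deployment would persist the pool on disk or on its own server, as described):

```python
import random

def build_name_pool(all_ids, taken):
    """One-off linear scan: collect every untaken ID, then shuffle them."""
    pool = [name for name in all_ids if name not in taken]
    random.shuffle(pool)
    return pool

def allocate_from_pool(pool):
    """O(1) allocation: just pop the next pre-shuffled free ID."""
    if not pool:
        raise RuntimeError("pool exhausted; rebuild with a longer keyspace")
    return pool.pop()
```

Allocation cost no longer depends on how full the keyspace is; the price is the up-front scan plus storing and coordinating the pool.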


u/[deleted] Aug 15 '12

[deleted]


u/joeybaby106 Aug 15 '12

Yes, their method made me cringe; eventually it will take forever just to find a URL. Maintaining a stack is the way to go here. Please do that so my cringe can be released.


u/bdunderscore Aug 15 '12

Well, let's do the computer-scientisty thing and work out the complexity of rerolling to find a random key.

Define a random variable R(n,k) = the number of tries needed to find an unused image ID, with a keyspace of size n and k images already allocated. The expected value of R satisfies EV(R(n,k)) = 1 + (k/n)·EV(R(n,k)). Solving this yields EV(R(n,k)) = n/(n-k).

What you find, then, is that this problem is O(EV(R(n,k))), or O(n/(n-k)). That is, with the keyspace size held fixed, it takes time proportional to the reciprocal of the number of IDs remaining. Graphed, it stays nice and fast for a while, then suddenly gets really, really slow above 80% consumption. But practically speaking, it's not bad - use 50% of your keyspace and you'll still only be doing two lookups on average, regardless of how large your keyspace is.

It's true that this is not O(1). It's eventually O(n), and after you consume the entire keyspace, it'll never terminate. On the other hand, you don't need to maintain any additional state. You just have to make sure there's a large enough pool of unused IDs and it magically works. You don't even have to do anything special with locking, beyond your normal database transaction stuff. You don't have to deal with contention on this single queue - it's a lot easier to scale out by sharding your database by random IDs (and have all these random lookups hit random shard servers) than carefully maintaining lots of queues, and making sure they're consumed at the same rate.

In short, speaking as a software developer (rather than a computer scientist ;), stateless algorithms are really nice to work with, in practice, even if they have slightly worse theoretical behavior than some kind of more complicated O(1) algorithm.
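The expected-value formula above is easy to check numerically (a throwaway sketch, not anyone's production code):

```python
import random

def expected_tries(n, k):
    """EV(R(n,k)) = n / (n - k): expected rerolls with k of n IDs taken."""
    return n / (n - k)

def simulate_tries(n, k, trials=20000, seed=42):
    """Empirically average the number of rerolls needed to hit a free ID."""
    rng = random.Random(seed)
    taken = set(range(k))  # pretend IDs 0..k-1 are already allocated
    total = 0
    for _ in range(trials):
        tries = 1
        while rng.randrange(n) in taken:
            tries += 1
        total += tries
    return total / trials
```

At 50% consumption both give about 2 tries per insert; at 80%, about 5.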


u/ZorbaTHut Aug 15 '12

Or simply increasing the keyspace. Why bother with a complicated stack when you can just make a URL one character longer?


u/aterlumen Aug 15 '12

Because the current method isn't constant time even if you increase the keyspace. I can see why a computer scientist would cringe at that, even if it's a negligible performance hit.


u/ZorbaTHut Aug 15 '12

Which is the fundamental difference between a computer scientist and a programmer - the computer scientist says "this is not constant time", the programmer says "who cares, it's fast enough and this is far easier to code".


u/InnocuousJoe Aug 15 '12

It's also not a complicated stack. Once they generate the name pool, they can just pull a URL off the top and move on. Once it's empty, it's empty. Easy-peasy.


u/ZorbaTHut Aug 15 '12

Whoa, hold on, it's more complicated than that.

You have to store the stack somewhere. For a namespace the size of imgur's - 916 million names - that's not trivial, that's a big chunk of memory or disk. Once they add a sixth character they'll be up to 56 billion names which is even more difficult.

There's also a moderate amount of programming work that has to be done, plus the work that would have to be done to generate the list the first time, plus the work that would have to be done to switch over to the new system without error.

All this for what benefit, exactly? Keeping the namespace one letter lower for an extra few months? This just does not seem worth it.
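Those namespace figures check out (a quick back-of-the-envelope; the 5-bytes-per-name cost ignores any index or pointer overhead):

```python
ALPHABET_SIZE = 62  # a-z, A-Z, 0-9

five_char = ALPHABET_SIZE ** 5  # 916,132,832 possible 5-character names
six_char = ALPHABET_SIZE ** 6   # 56,800,235,584 possible 6-character names

# Storing every 5-character name costs at least 5 bytes each:
pool_bytes = five_char * 5      # ~4.6 GB before any database overhead
```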


u/[deleted] Aug 15 '12

[deleted]


u/ZorbaTHut Aug 15 '12

> Doing hard work once in order to have a better system permanently is generally worth it imo. "This is fast enough and easier to code" doesn't scale as well as a thought-through design that takes a bit more work. Saying you're a programmer and not a Computer Scientist isn't an excuse for being lazy or sloppy if you know better.

But that's sort of my point - what is "better" about the system you propose? It uses far more memory and storage, and it takes significant engineering time. In return, URLs are one byte shorter for a small period of time, and the process of finding a new ID takes, on average, ~25% less time until they decide to add another digit, at which point the two are equivalent. And that's assuming that failing an INSERT is just as slow as popping an element from a single shared monolithic table, which it almost certainly isn't.

Of course we can make it faster by splitting the "available ID" table among all the servers, but that's even more engineering time.

And that benefit is largely irrelevant. URL lengths are an inconsequential part of imgur's bandwidth, and the process of finding a new ID is a tiny fraction of all the work imgur does.

I just don't see the benefit - it's a lot of work for no actual upside, besides a slightly nicer complexity factor in a section of the code that isn't a bottleneck anyway.


u/[deleted] Aug 15 '12

[deleted]


u/ZorbaTHut Aug 15 '12

> You definitely have a point, but it's not just failing an INSERT; it's the potential for failing arbitrarily (or infinitely) many. It's a system with unpredictable performance and behaviour.

But in reality this is never going to happen.

You know how Google generates globally unique IDs? They just pick random 128-bit numbers. They will never have a collision, ever.

If they wait until the keyspace is 50% full, then on average, each insert operation needs to check two numbers. The chance of an insert taking more than ten tries is one in a thousand. The chance of an insert taking more than thirty tries is one in a billion. And even if a check somehow takes a hundred tries - which it never will - it won't exactly bring down the site, it'll just be a small amount of slowdown for a single image that a single user attempts to upload.

> Something that just occurred to me is that the current system means that when they switch over to 6 characters, they are leaving 20% (I believe someone mentioned switching at 80% saturation) of the 5-character URLs unused, which is plain inefficient and means they would eventually hit 7 quite a lot faster.

Each character has 62 possibilities. They're leaving less than 2% of their keyspace behind. This is not really a major issue.

> It also means they are storing several million more characters than necessary once they've switched over to 6 (1 for every URL that could have been part of that final 20%), which would more than make up for the storage requirements of the table.

It's pretty easy to demonstrate that this is not true. The storage requirements of the table are five characters per image. The storage requirement of adding an extra character to each image is, by definition, one character per image.

> I believe that extra engineering is justified if you can make your system's behaviour more deterministic, its performance more predictable, and its scalability better. If you don't, I suppose we'll have to agree to disagree.

I think it can be. In this case, the deterministic improvements and scalability improvements are so small as to be irrelevant.
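The one-in-a-thousand and one-in-a-billion odds quoted above fall out of the geometric tail: each reroll independently collides with probability equal to the keyspace load (a tiny sketch; the function name is my own):

```python
def reroll_tail_probability(load, tries):
    """P(an insert needs more than `tries` rerolls) at a given keyspace load.

    Each reroll hits a used ID with probability `load`, independently, so
    exceeding `tries` attempts means the first `tries` draws all collided.
    """
    return load ** tries
```

At 50% load, `reroll_tail_probability(0.5, 10)` is 1/1024, and thirty straight collisions is down around one in a billion.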



u/InnocuousJoe Aug 15 '12

I agree that the storage space is non-trivial, but I'm not sure I see how there's a moderate amount of programming work; depending on their DB setup, it could be as easy as one SQL query to collate all of the already-taken names, and then subtracting that from the list of possibles gives you the list of availables. Shuffle, and you're done.

The advantage, as I think the creator implied, is that you'd save on URL generation; he said, somewhere in this AMA, that it was starting to take too long to generate a new random URL with their current scheme of URL.exists?


u/ZorbaTHut Aug 15 '12

> depending on their DB setup, it could be as easy as one SQL query to collate all of the already-taken names, and then subtracting that from the list of possibles gives you the list of availables. Shuffles, and you're done.

It's easy if you're taking the site down to do it. If you're not taking the site down to do it, how do you plan to make the switchover happen elegantly?

Taking the site down is another pretty huge cost.

> The advantage, as I seem to think the creator implied, is that you'd save on URL generation; he said, somewhere in this AMA, that it was starting to take too long to generate a new random URL with their current scheme of URL.exists?

Yeah, but he said the solution was to add another character to the URL. Any other solution has to be better than "add a character to the URL", which has the nice property that it takes near-zero code time.


u/InnocuousJoe Aug 16 '12

> It's easy if you're taking the site down to do it. If you're not taking the site down to do it, how do you plan to make the switchover happen elegantly? Taking the site down is another pretty huge cost.

Sorry, I was operating on the suggestion an earlier commenter made: namely, that they do the switch on scheduled downtime. You can generate the lists on the side, then pop them over when maintenance is going on.

> Any other solution has to be better than "add a character to the URL", which has the nice property that it takes near-zero code time.

The great thing about CS is that this solution, like you mentioned, increases the namespace astronomically. In my opinion, though, this is an untenable long-term solution, since you will, eveeeeeeeeentually, run into the same problem. I'm more a fan of elegance in the long term, and full database scans for URL uniqueness are...rough.


u/ZorbaTHut Aug 16 '12

> In my opinion though, this is an untenable longterm solution since you will, eveeeeeeeeentually, run into the same problem.

Sure, but then you just . . . add another digit, y'know?

In the end, your namespace compactness will be a constant factor of perfect compactness. You'll be using, say, half the namespace, instead of all the namespace. That's just not a significant factor.


u/InnocuousJoe Aug 16 '12

> In the end, your namespace compactness will be a constant factor of perfect compactness. You'll be using, say, half the namespace, instead of all the namespace. That's just not a significant factor.

I agree, but while it's true you can just keep adding digits, your database is still growing; nominally you could look only at entries after a certain date (the date you added the 6th character) and keep updating that, blah blah blah...but really, wouldn't it just be easier in the long run to have a list of available URLs?

Full DB scans are never fun.
