YouTube System Architecture

[2007-12-27 18:08 | by 张宴]
  Video talk: Cuong Do (an engineering manager at YouTube/Google)
  Venue: Seattle Conference on Scalability

  The following is the write-up Kyle Cordes made based on the video above:

  YouTube Scalability Talk

  Cuong Do of YouTube / Google recently gave a Google Tech Talk on scalability.

  I found it interesting in light of my own comments on YouTube’s 45 TB a while back.

  Here are my notes from his talk, a mix of what he said and my commentary:

  In the summer of 2006, they grew from 30 million pages per day to 100 million pages per day, in a 4 month period. (Wow! In most organizations, it takes nearly 4 months to pick out, order, install, and set up a few servers.)

  YouTube uses Apache for FastCGI serving. (I wonder if things would have been easier for them had they chosen nginx, which is apparently wonderful for FastCGI and less problematic than Lighttpd)

  YouTube is coded mostly in Python. Why? “Development speed critical”.

  They use psyco, a dynamic Python -> native-code compiler, and also C extensions, for performance-critical work.
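A minimal sketch of that pattern: keep the code in Python and let psyco (a just-in-time specializer current in 2007) accelerate hot functions when it is available. The `checksum` function below is a hypothetical stand-in for "performance critical work", not code from the talk.

```python
try:
    import psyco      # harmless no-op on machines without psyco installed
    psyco.full()      # compile every function to machine code on first call
except ImportError:
    pass

def checksum(data: bytes) -> int:
    """A hypothetical hot loop of the kind psyco would specialize."""
    total = 0
    for b in data:
        total = (total * 31 + b) & 0xFFFFFFFF
    return total
```

The try/except guard means the same code runs (just slower) where psyco is absent, which keeps development and production code identical.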

  They use Lighttpd for serving the video itself, for a big improvement over Apache.

  Each video is hosted by a “mini cluster”, which is a set of machines with the same content. This is a simple way to provide headroom (slack), so that a machine can be taken down for maintenance (or can fail) without affecting users. It also provides a form of backup.
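The mini-cluster idea can be sketched as follows; hostnames and the cluster count are purely illustrative, not YouTube's actual layout:

```python
import hashlib
import random

# Each video maps to one small group of machines that all hold the same
# content; a request can be served by any machine in the group that is up.
CLUSTERS = [
    ["v1.example.com", "v2.example.com", "v3.example.com"],
    ["v4.example.com", "v5.example.com", "v6.example.com"],
]

def cluster_for(video_id: str) -> list:
    h = int(hashlib.md5(video_id.encode()).hexdigest(), 16)
    return CLUSTERS[h % len(CLUSTERS)]

def pick_server(video_id: str, down=frozenset()) -> str:
    # Any replica can serve the video, so a down machine just shrinks
    # the candidate list instead of making the video unavailable.
    healthy = [m for m in cluster_for(video_id) if m not in down]
    return random.choice(healthy)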

  The most popular videos are on a CDN (Content Distribution Network) - they use external CDNs as well as Google’s CDN. Requests to their own machines are therefore tail-heavy (in the “Long Tail” sense), because the head goes to the CDN instead.

  Because of the tail-heavy load, random disk seeks are especially important (perhaps more important than caching?).

  YouTube uses simple, cheap, commodity Hardware. The more expensive the hardware, the more expensive everything else gets (support, etc.). Maintenance is mostly done with rsync, SSH, other simple, common tools.
The fun is not over: Cuong showed a recent email titled “3 days of video storage left”. There is constant work to keep up with the growth.

  Thumbnails turn out to be surprisingly hard to serve efficiently. Because there are, on average, 4 thumbnails per video and many thumbnails per page, the overall number of thumbnails per second is enormous. They use a separate group of machines to serve thumbnails, with extensive caching and OS tuning specific to this load.

  YouTube was bit by a “too many files in one dir” limit: at one point they could accept no more uploads (!!) because of this. The first fix was the usual one: split the files across many directories, and switch to another file system better suited for many small files.
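The "split the files across many directories" fix usually looks like the sketch below: derive a couple of subdirectory levels from a hash of the file name, so no single directory ever holds more than a small fraction of the files. The fan-out and names are illustrative, not YouTube's actual scheme.

```python
import hashlib
import os

def sharded_path(root: str, filename: str) -> str:
    """Spread files across 256 x 256 subdirectories by name hash."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], filename)
```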

  Cuong joked about “The Windows approach of scaling: restart everything”

  Lighttpd turned out to be poor for serving the thumbnails, because its main loop is a bottleneck to load files from disk; they addressed this by modifying Lighttpd to add worker threads to read from disk. This was good but still not good enough, with one thumbnail per file, because the enormous number of files was terribly slow to work with (imagine tarring up many million files).
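The worker-thread fix is easy to illustrate (in Python rather than Lighttpd's actual C): blocking disk reads are handed to a small thread pool, so a slow seek never stalls the event loop that accepts connections. This is a sketch of the technique, not Lighttpd's code.

```python
from concurrent.futures import ThreadPoolExecutor

# A small pool of worker threads does the blocking open()/read() calls.
_readers = ThreadPoolExecutor(max_workers=8)

def read_async(path: str):
    """Return a future; the caller's event loop keeps running
    while the worker thread waits on the disk seek."""
    def _read():
        with open(path, "rb") as f:
            return f.read()
    return _readers.submit(_read)
```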

  Their new solution for thumbnails is to use Google’s BigTable, which provides high performance for a large number of rows, fault tolerance, caching, etc. This is a nice (and rare?) example of actual synergy in an acquisition.

  YouTube uses MySQL to store metadata. Early on they hit a Linux kernel issue that prioritized the page cache over application data: the kernel swapped out the app data, totally overwhelming the system. They recovered from this by removing the swap partition (while live!). This worked.

  YouTube uses Memcached.
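Using Memcached for metadata typically means the standard cache-aside pattern sketched below; the dict stands in for a real memcached client, and the lookup function is hypothetical:

```python
cache = {}   # stand-in for a memcached client

def fetch_from_db(video_id: str) -> dict:
    return {"id": video_id, "title": "stub"}   # placeholder DB query

def get_metadata(video_id: str) -> dict:
    """Cache-aside: try the cache, fall back to the DB, then populate."""
    key = "meta:" + video_id
    hit = cache.get(key)
    if hit is None:
        hit = fetch_from_db(video_id)
        cache[key] = hit
    return hit
```

The point is that repeated reads of hot metadata never touch MySQL, which matters enormously under a read-heavy page load.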

  To scale out the database, they first used MySQL replication. Like everyone else that goes down this path, they eventually reached a point where replicating the writes to all the DBs used up all the capacity of the slaves. They also hit an issue with threading and replication, which they worked around with a very clever “cache primer thread” working a second or so ahead of the replication thread, prefetching the data it would need.
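The primer-thread trick can be shown with a deterministic single-process simulation (all names here are hypothetical): a prefetch pass runs a few events ahead of the apply loop, so each write finds its row already warm in cache.

```python
def apply_with_primer(log, lookahead=3):
    """Simulate a replication thread whose reads are pre-warmed by a
    primer running `lookahead` events ahead.  Returns the cache misses
    the replication thread still suffers."""
    cache = set()
    misses = 0
    for i, event in enumerate(log):
        # Primer: prefetch rows the replication thread will need soon.
        for upcoming in log[i + 1:i + 1 + lookahead]:
            cache.add(upcoming["row"])
        # Replication thread: apply the write; a cold row costs a seek.
        if event["row"] not in cache:
            misses += 1
            cache.add(event["row"])
    return misses
```

With any positive lookahead, only the very first event misses; with no primer every event pays the cold-read cost, which is the bottleneck the real thread removed.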

  As the replicate-one-DB approach faltered, they resorted to various desperate measures, such as splitting the video watching into a separate set of replicas, intentionally allowing the non-video-serving parts of YouTube to perform badly so as to focus on serving videos.

  Their initial MySQL DB server configuration had 10 disks in a RAID10. This does not work very well, because the DB/OS can’t take advantage of the multiple disks in parallel. They moved to a set of RAID1s, appended together. In my experience, this is better, but still not great. An approach that usually works even better is to intentionally split different data on to different RAIDs: for example, a RAID for the OS / application, a RAID for the DB logs, one or more RAIDs for the DB tables (use “tablespaces” to get your #1 busiest table on separate spindles from your #2 busiest table), one or more RAIDs for indexes, etc. Big-iron Oracle installations sometimes take this approach to extremes; the same thing can be done with free DBs on free OSs also.

  In spite of all these efforts, they reached a point where replication of one large DB was no longer able to keep up. Like everyone else, they figured out that the solution was database partitioning into “shards”. This spread reads and writes into many different databases (on different servers) that are not all running each other’s writes. The result is a large performance boost, better cache locality, etc. YouTube reduced their total DB hardware by 30% in the process.

  It is important to divide users across shards by a controllable lookup mechanism, not only by a hash of the username/ID/whatever, so that you can rebalance shards incrementally.
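A minimal sketch of that point (all names illustrative; in practice the lookup table would be a small, heavily cached DB table rather than a dict): placement may start as a hash, but because it goes through an explicit table, any single user can later be repointed to rebalance load, which a bare `hash(user_id) % NUM_SHARDS` scheme cannot do.

```python
NUM_SHARDS = 4
shard_of = {}   # user_id -> shard number: the controllable lookup table

def assign(user_id: str) -> int:
    """Default placement by hash, but recorded in the lookup table."""
    return shard_of.setdefault(user_id, hash(user_id) % NUM_SHARDS)

def rebalance(user_id: str, new_shard: int) -> None:
    """Move one user to another shard by updating the table entry."""
    shard_of[user_id] = new_shard
```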

  An interesting DMCA issue: YouTube complies with takedown requests; but sometimes the videos are cached way out on the “edge” of the network (their caches, and other people’s caches), so it’s hard to get a video to disappear globally right away. This sometimes angers content owners.

  Early on, YouTube leased their hardware.

  http://kylecordes.com/2007/07/12/youtube-scalability/


