Tue Feb 15, 2011 10:43 pm
Support is not the real problem with software routers.
I think the main problem is power consumption. Getting 10, 40 or 100 Gbps routing without packet loss on X86, even with a GPU, is very power hungry.
It is something for geeks, not something we can put in a datacenter for real use.
X86 and GPUs are good for desktop computing and technical / scientific workstations. Not more.
Routing with X86 and GPUs is like using a Boeing 747 to go to the supermarket.
Try to do a Cisco 7600 routing task (1 terabit/sec) with X86 machines. You will end up needing ten times the power and five times the space.
This is not terribly attractive, unless you are in love with X86 machines and can't imagine not giving money to Intel or AMD.
RouterBoards use layer-2 logic circuits, giving basic layer-2 switching at wire speed without packet loss (I can confirm this, as I tested it with a professional Ethernet tester at 1 Gbps full duplex).
The next step to get efficient and very fast routing from low-cost products is to develop layer-3 code for logic circuits, using FPGAs or similar devices. This means porting the layer-3 routing code of the Linux kernel to those circuits (HDL programming, a very interesting area).
Those ports should be integrated into the mainline kernel at kernel.org, so that good support, responsive development and longevity can be achieved.
Then you could have 100 Gbps routing at a reasonable price and without enormous power consumption. You will not have this before then, except perhaps with future low-cost Linksys or similar products, but with a fraction of the functions you have on RouterOS, and with no support at all.
Do not forget that a router generally has at least two ports. So if you have two 100 Gbps ports with full-duplex streams, you need 200 Gbps of total routing capability, with a total traffic bandwidth of 400 Gbps.
If you have a router with ten 100 Gbps ports, you need a total routing speed of 2 terabits/sec, without packet loss or jitter. Packet loss and jitter are no longer tolerated in carrier-class hardware because of real-time traffic like video and VoIP.
Currently the price of a 100 Gbps CFP module is about €40,000, and the warranty only 3-6 months. So only 10 and 40 Gbps are possible for small and medium providers.
Google, for example, is using 100 Gbps transceivers, and most big providers are at least testing them in the lab.
In our country 100 Gbps ports are not really common. Most inter-provider Ethernet links are 1 or 10 Gbps, even with tier-one providers.
Even a router with 10 Gbps ports still needs wire speed to be carrier class. If you have, say, ten 10 Gbps ports, then you need 200 Gbps of routing capability and 400 Gbps of total bandwidth in the router's internal data path.
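The worst-case arithmetic above can be sketched in a few lines. This follows the post's own convention of counting both directions of every full-duplex port, and sizing the internal data path at twice the routing load; the function name `capacity_gbps` is just illustrative:

```python
def capacity_gbps(ports, speed_gbps):
    """Worst-case figures for a router with `ports` full-duplex
    ports of `speed_gbps` each."""
    # Every port sending and receiving at line rate,
    # counting both directions of each full-duplex link.
    routing = 2 * ports * speed_gbps
    # Internal data path sized at twice the routing load.
    internal = 2 * routing
    return routing, internal

# Ten 100 Gbps ports: 2 Tbps of routing capability needed.
print(capacity_gbps(10, 100))  # -> (2000, 4000)

# Ten 10 Gbps ports: 200 Gbps routing, 400 Gbps internal path.
print(capacity_gbps(10, 10))   # -> (200, 400)
```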
This cannot be supported by a single X86 machine.
RouterBoards can do a good job with a few 100 Mbps ports, near wire speed. X86 can do a good job with a few 1 Gbps ports, near wire speed, but I would not use X86 for carrier-class routing at 10 or 40 Gbps. You can forget about 100 Gbps, even for a single-port X86 router.
A trend today for lowering prices is to try to stay at layer 2 in the backbone, using only a mesh of switches. But this needs proprietary protocols, and I've seen big compatibility problems as soon as you try to extend this architecture, problems that can very easily take down a full provider network.
The Provider Backbone Bridging suite of Ethernet protocols should allow better interoperability between providers at layer 2 in the near future, enabling circuits like those of ATM networks.
Last edited by Tech on Wed Feb 16, 2011 2:02 am, edited 1 time in total.