Measuring the value of a research paper is usually difficult, and different researchers may define good research differently. No matter how the definition varies from person to person, however, one judgement would be applied consistently: research whose achievements impact the real world is good research. Today, the congestion control protocol proposed in this paper is the default TCP algorithm in Linux; it is used by more than 40% of Internet servers around the world and by tens of millions of Linux users for their daily Internet communication. Without doubt, this paper has had a real impact and qualifies as excellent research under any definition.

Solid work does not need to be complex work. In fact, the proposed protocol is very elegant, involving no complex mathematical analysis or theorems. It uses a cubic function as the window growth function to control network congestion. The elegance of this design is that it achieves RTT-fairness (competing flows with different RTTs obtain a fair share, because window growth depends on the elapsed time since the last loss event rather than on RTT) and intra-protocol fairness (two competing CUBIC flows converge to a fair share). The design is also TCP-friendly, i.e., backward compatible with conventional TCP, and no parameter adjustment is needed in practical implementations. These properties bring great advantages in real systems: a flow recovers quickly from congestion, and the recovery scales to networks with a large dynamic range of BDPs. An interesting fact I noticed is that in the area of congestion control protocol design, most influential works (e.g., STCP, HSTCP, HTCP, TCP-Vegas, FAST, TCP-Westwood, TCP-Illinois, TCP-Hybla, TCP-Veno, as enumerated in this paper) are engineering-style -- they attack a practical problem through intuitive thinking and design rather than mathematical modeling and theoretical derivation.
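To make the elegance concrete, here is a minimal sketch of the cubic window growth function as given in the paper, W(t) = C*(t - K)^3 + W_max, where t is the time elapsed since the last loss event. The constants C and beta follow the paper's suggested defaults; this is an illustration of the curve's shape, not the actual Linux implementation.

```python
C = 0.4      # scaling constant (paper's default)
BETA = 0.2   # multiplicative decrease factor (paper's default)

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in packets) t seconds after a loss event.

    w_max is the window size just before the last loss.
    """
    # K is the time needed to grow back to w_max absent further losses.
    k = (w_max * BETA / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max

# The curve is concave below w_max (fast recovery, then a plateau near
# the previous saturation point) and convex above it (aggressive probing
# for new bandwidth). Because t is wall-clock time since the loss, not a
# count of RTTs, growth is RTT-independent -- the source of RTT-fairness.
```

Note that the window depends only on elapsed time and W_max, so two flows sharing a bottleneck but with different RTTs trace the same growth curve, which is exactly the property the paper highlights.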
Though I have limited knowledge of the congestion control literature, I can imagine that there must be plenty of theoretical papers that use queueing theory, stochastic analysis, and so on to model the design problem and derive a design from the mathematics. Yet almost none of them has turned out to be influential in real systems. This poses a serious question: does theory really provide helpful guidelines for practical design? My answer is that it depends. Without doubt, theoretical work can deepen our understanding of real problems. But its utility is highly limited by the impractical assumptions it usually imposes, such as i.i.d. traffic or Poisson arrival processes. In other words, to make the analysis tractable, it compromises accuracy, which can be an essential problem in practice.