Ikhlaq Sidhu, Tal Lavian, Victoria Howell – University of California, Berkeley. Accepted paper for 2015 ASEE Annual Conference and Exposition. June 2015.
Our purpose is to develop new models that define the advanced development & corporate research approaches of modern global high-tech firms. While the world has moved on from Bell Labs' famous advanced research model, visionary and farsighted technology-driven innovation is still vital to many of today's most successful global technology companies. Corporate innovation strategies are implemented through research laboratories, academic collaborations, advanced technology groups, standards groups, CTO office prototypes, internal/external incubations, and open innovation. Unlike the well-understood nature of short-term product development, long time frames, fuzzily defined goals, and unclear measures of success lead to uncertainty about how best to run and fund advanced technology and applied corporate research. While all firms agree that advanced research is vital, their measures and processes differ widely. To identify modern models of effective advanced research approaches, the contexts in which such approaches are most effective, and the metrics by which they should be evaluated, we interviewed leaders at successful and established global firms such as Cisco, Intel, and Google. We used the data collected to inductively arrive at six models that characterize modern advanced research approaches. The models differ in approach: some rely on academic and industry collaboration, while others revolve around disrupting the status quo. Because the companies included in this study were all successful, each model reflects a useful approach to advanced research; therefore, no single model should be considered better or more ideal than the others. The models could be of use to a company trying to create an appropriate advanced research approach based on its goals and needs. Similarly, these models could help a company fine-tune its existing R&D approach as its goals and identity develop over time.
The models we present here provide useful terminology and will serve as a backbone for further study of advanced development & corporate research approaches.
Determann L.; Berkeley Technology Law Journal. Volume 21, Issue 4, Fall 2006.
(Lavian T. contributor to the technical section).
Companies have been fighting about software interoperability and substitutability for decades. The battles have usually involved wholesale copying and significant modifications of code to achieve compatibility, and the law seems fairly settled in this respect. More recently, however, software developers and users alike have started to wake up to potential problems regarding combinations of separate programs, particularly in connection with open source software. Fear, uncertainty and doubt (“FUD”) prevail in all quarters and have become a prominent topic in the computer lawyer community.
This Article begins with a brief introduction to the issue and its context (I), examines the relevant copyright law principles in general (II) and the application of copyright law to software in particular (III), goes on to illustrate the classification of software combinations under copyright law in a few common technical and commercial scenarios (IV), and addresses the practical implications in the context of commercial (V) and open source licensing (VI), which is especially timely in light of the current debate surrounding the update of the General Public License (GPL). The article concludes that most forms of software combinations are less dangerous than commonly assumed, because they do not constitute derivative works (but instead either compilations or sui generis aggregations outside the scope of the copyright owner’s exclusive rights), and a number of statutes and legal doctrines significantly limit a copyright owner’s ability to contractually prohibit software combinations that do not also constitute derivative works under copyright law.
Gommans L.; Van Oudenaarde B.; Dijkstra F.; De Laat C.; Lavian T.; Monga I.; Taal A.; Travostino F.; Wan A.; IEEE Communications Magazine, vol. 44, no. 3, March 2006, pp. 100-106.
We realize an open, programmable paradigm for application-driven network control by way of a novel network plane – the “service plane” – layered above legacy networks. The service plane bridges domains, establishes trust, and exposes control to credited users/applications while preventing unauthorized access and resource theft. The authentication, authorization, and accounting subsystem and the dynamic resource allocation controller are the two defining building blocks of our service plane. In concert, they act upon an interconnection request or a restoration request according to application requirements, security credentials, and domain-resident policy. We have experimented with such a service plane in an optical, large-scale testbed featuring two hubs (NetherLight in Amsterdam, StarLight in Chicago) and attached network clouds, each representing an independent domain. The dynamic interconnection of the heterogeneous domains occurred at Layer 1. The interconnections ultimately resulted in an optical end-to-end path (lightpath) for use by the requesting grid application.
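The request flow described above – an AAA check gating access, followed by dynamic resource allocation – lends itself to a small illustrative sketch. The following Python is not the actual service-plane code; every class and name (`AAASubsystem`, `ResourceController`, `ServicePlane`) is invented to show the gating-then-allocation pattern:

```python
class AAASubsystem:
    """Authenticates a credential against domain-resident policy."""
    def __init__(self, authorized):
        self.authorized = set(authorized)  # {(credential, domain), ...}

    def authorize(self, credential, domain):
        return (credential, domain) in self.authorized


class ResourceController:
    """Stands in for the dynamic resource allocation controller:
    maps an approved request onto an end-to-end path."""
    def __init__(self):
        self.allocations = []

    def allocate(self, src_domain, dst_domain):
        path = f"{src_domain}->{dst_domain}"
        self.allocations.append(path)
        return path


class ServicePlane:
    """Mediates interconnection requests: deny unauthorized callers,
    otherwise allocate the cross-domain path."""
    def __init__(self, aaa, controller):
        self.aaa = aaa
        self.controller = controller

    def request_interconnect(self, credential, src, dst):
        if not (self.aaa.authorize(credential, src)
                and self.aaa.authorize(credential, dst)):
            return None  # prevents unauthorized access / resource theft
        return self.controller.allocate(src, dst)


aaa = AAASubsystem({("grid-app", "NetherLight"), ("grid-app", "StarLight")})
plane = ServicePlane(aaa, ResourceController())
# A credentialed application obtains a cross-domain lightpath:
assert plane.request_interconnect("grid-app", "NetherLight", "StarLight") \
    == "NetherLight->StarLight"
# A credential unknown to the AAA subsystem is refused:
assert plane.request_interconnect("rogue", "NetherLight", "StarLight") is None
```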
Tal Lavian, Randy H. Katz; Doctoral Thesis, University of California at Berkeley. January 2006.
The practice of science experienced a number of paradigm shifts in the 20th century, including the growth of large geographically dispersed teams and the use of simulations and computational science as a third branch, complementing theory and laboratory experiments. The recent exponential growth in network capacity, brought about by the rapid development of agile optical transport, is resulting in another such shift as the 21st century progresses. Essential to this new branch of e-Science applications is the capability of transferring immense amounts of data: tens and hundreds of terabytes, and even petabytes.
The invention of the transistor in 1947 at Bell Labs was the triggering event that led to the technology revolution of the 20th century. The completion of the Human Genome Project (HGP) in 2003 was the triggering event for the life science revolution of the 21st century. The understanding of the genome, DNA, proteins, and enzymes is prerequisite to modifying their properties and the advancement of systematic biology. Grid Computing has become the fundamental platform to conduct this e-Science research. Vast increases in data generation by e-Science applications, along with advances in computation, storage and communication, affect the nature of scientific research. During this decade, crossing the “Peta” line is expected: Petabyte in data size, Petaflop in CPU processing, and Petabit/s in network bandwidth.
Numerous challenges arise from a network with a capacity millions of times greater than that of the public Internet. Currently, the distribution of large amounts of data is restricted by the inherent bottleneck nature of today's public Internet architecture, which employs packet switching technologies. Bandwidth limitations of the Internet inhibit the advancement and utilization of new e-Science applications in Grid Computing. These emerging e-Science applications are evolving in data centers and clusters; however, the potential capability of a globally distributed system over long distances is yet to be realized. Today, the orchestration of network resources and services is done manually via multi-party conference calls, emails, yellow sticky notes, and reminder communications, all of which rely on human interaction to get results. The work in this thesis automates the orchestration of networks with other resources, better utilizing all resources in a time-efficient manner. Automation allows for a vastly more comprehensive use of all components and removes human limitations from the process. We demonstrated automatic lambda setup and teardown on behalf of application servers over a MEMS testbed in the Chicago metro area in a matter of seconds, and across domains over transatlantic links in around a minute.
The main goal of this thesis is to build a new grid-computing paradigm that fully harnesses the available communication infrastructure. An optical network functions as the third leg in orchestration with computation and storage. This tripod architecture becomes the foundation of global distribution of vast amounts of data in emerging e-Science applications.
A key investigation area of this thesis is the fundamental technologies that allow e-Science applications in a Grid Virtual Organization (VO) to access abundant optical bandwidth through the new technology of Lambda on demand. This technology provides essential networking fundamentals that are presently missing from the Grid Computing environment. Further, this technology overcomes current bandwidth limitations, making the VO a reality and consequently removing some basic limitations on the growth of this new big-science branch.
In this thesis, the Lambda Data Grid provides the knowledge plane that allows e-Science applications to transfer enormous amounts of data over a dedicated Lightpath, resulting in the true viability of global VO. This enhances science research by allowing large distributed teams to work efficiently, utilizing simulations and computational science as a third branch of research.
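The orchestration the thesis automates – set up a dedicated lightpath, transfer the data, tear the lightpath down – can be pictured as a resource whose lifetime brackets the transfer. This is an illustrative Python analogy, not code from the thesis; the function name and endpoints are invented:

```python
import contextlib

@contextlib.contextmanager
def lightpath(src, dst, log):
    """Automated setup/teardown of a dedicated lightpath around a
    transfer, replacing the manual orchestration described above."""
    log.append(f"setup {src}-{dst}")
    try:
        yield f"{src}-{dst}"
    finally:
        # teardown happens automatically, even if the transfer fails
        log.append(f"teardown {src}-{dst}")

log = []
with lightpath("Amsterdam", "Chicago", log) as path:
    log.append(f"transfer over {path}")

assert log == ["setup Amsterdam-Chicago",
               "transfer over Amsterdam-Chicago",
               "teardown Amsterdam-Chicago"]
```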
Hoang D.B.; T. Lavian; The 4th Workshop on the Internet, Telecommunications and Signal Processing, WITSP 2005, December 19-21, 2005, Sunshine Coast, Australia.
Circuit switching and packet switching were developed to achieve statistical gain in sharing the transmission bandwidth of a “passive” transport network, whereby voice and data are transported end-to-end without content modification by the network. This paper promotes a radical switching technology that enables the network to process and transform its contents as well as transport them. In this paper we propose “information switching” as a technology for the future generation Internet that embeds networks with the intelligence necessary to build truly cognitive information processing systems. By “cognitive information processing” we mean that network elements can intelligently and selectively deliver relevant, filtered, pre-processed information to the desired destinations. Masses of raw data can be processed and primed by the network, on the move to their destination, into a form suitable for human interaction and decision-making. A plausible information switching architecture that makes use of advances in information, computer, and communication technologies is also presented.
George Clapp, Tiziana Ferrari, Doan B. Hoang, Gigi Karmous-Edwards, Tal Lavian, Mark J. Leese, Paul Mealor, Inder Monga, Volker Sander, Franco Travostino; Global Grid Forum (GGF).
Network services are services that specialize in the handling of network-related or network-resident resources. Examples of network services are data transport service, network advance reservation service, network Quality of Service (QoS) service, network information service, network monitoring service, and AAA (authentication, authorization, and accounting) service.
This informational draft describes how several network services combine to yield a rich mediation function (a resource manager) between grid applications and legacy networks. By virtue of these services, the network resource joins CPU and storage as a first-class, grid-managed resource (and is handled as such by a community scheduler or other OGSA services).
A network service is further labeled as a Grid network service whenever the service has roles and/or interfaces that are deemed to be specific to a grid infrastructure. The three dominant foci of this GHPN effort are a) the relationship between network services and the known elements of grid infrastructure, b) the functional characterization of each grid network service, and c) the interplay among grid network services. The definition of any particular grid network service (e.g., in terms of actual portTypes) is out of scope. The breadth exercise captured by this document is meant to spawn depth work around several grid network services, resulting in standards-track documents homed in either existing working groups or new working groups within the GGF.
Allcock B.; Arnaud B.; Lavian T.; Papadopoulos P.B.; Hasan M.Z.; Kaplow W.; IEEE Hot Interconnects at Stanford University 2005, pp. 89-90.
Grid computing is an attempt to make computing work like the power grid: when you run a job, you shouldn't know or care where it runs, so long as it gets done within your constraints (including security). However, in attempting to accomplish this, Grid researchers are presenting network access patterns and loads different from what has been typical of Internet traffic. MPI applications generate latency-critical, bursty, small-message traffic; some applications produce data sets of hundreds of gigabytes and even terabytes that need to be moved quickly and efficiently; and remote control of earthquake shake tables requires constant, low jitter. Grid researchers are asking for finer-grained control of the network, dynamic optical routes, the ability for user applications (via middleware) to alter router configurations, and more. For some network operators, this sounds like their worst nightmare come true. For the network hardware vendors, this presents challenges, to say the least. This panel is intended to bring together Grid researchers, network operators, and network hardware vendors to discuss what the Grid researchers want and why, what impact that will have on network operations, and what challenges it will bring for future hardware designs.
Travostino F.; Keates R.; Lavian T.; Monga I.; Schofield B.; Nortel Technical Journal, February 2005, pp. 23-26.
Intelligent networking and the ability for applications to more effectively use all of the network’s capability, rather than just the transport “pipe,” have been elusive. Until now. Nortel has developed a proof-of-concept
software capability – service-mediation “middleware” called the Dynamic Resource Allocation Controller (DRAC) – that runs on any Java platform and opens up the network to applications with proper credentials, making available all of the properties of a converged network, including service topology, time-of-day reservations, and interdomain connectivity options. With a more open network, applications can directly provision and invoke services, with no need for operator involvement or point-and-click sessions. In its first real-world demonstrations in large research networks, DRAC is showing it can improve user satisfaction while reducing network operations and investment costs.
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generation Dynamic Optical Networks
Hoang D.B.; Cohen H.; Cutrell D.; Figueira S.; Lavian T.; Mambretti J.; Monga I.; Naiksatam S.; Travostino F.; Proceedings IEEE Globecom 2004, Workshop on High-Performance Global Grid Networks, Houston, 29 Nov.-3 Dec. 2004, pp. 400-409.
An architecture is proposed for data-intensive services enabled by next generation dynamic optical networks. The architecture supports new data communication services that allow for coordinating extremely large sets of distributed data. The architecture allows for novel features including algorithms for optimizing and scheduling data transfers, methods for allocating and scheduling network resources, and an intelligent middleware platform capable of interfacing application-level services to the underlying optical technologies. The significance of the architecture is twofold: 1) it encapsulates “optical network resources” into a service framework to support dynamically provisioned and advance-scheduled data-intensive transport services, and 2) it establishes a generalized enabling framework for intelligent services and applications over next generation networks, not necessarily optical end-to-end. DWDM-RAM is an implementation of the architecture, which is both conceptual and experimental. The architecture has been implemented in prototype on OMNInet, an advanced experimental metro-area optical testbed based on a novel architecture, protocols, control plane services (Optical Dynamic Intelligent Network, ODIN), and advanced photonic components. This paper presents the concepts behind the DWDM-RAM architecture and its design. The paper also describes an application scenario using the architecture's data transfer service and network resource services over the agile OMNInet testbed.
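The advance-scheduling feature described above can be illustrated with a toy reservation table for a single wavelength. This is a hypothetical sketch, not the DWDM-RAM scheduler itself; the class name and time units are invented:

```python
class LambdaScheduler:
    """Toy advance-reservation table for one wavelength (lambda).
    Windows are (start, end) tuples in abstract time units."""
    def __init__(self):
        self.reservations = []

    def _overlaps(self, a, b):
        # Half-open windows [start, end) overlap iff each starts
        # before the other ends.
        return a[0] < b[1] and b[0] < a[1]

    def reserve(self, start, end):
        window = (start, end)
        if any(self._overlaps(window, r) for r in self.reservations):
            return False  # lambda already committed in that window
        self.reservations.append(window)
        return True


sched = LambdaScheduler()
assert sched.reserve(0, 10)       # first transfer gets the lightpath
assert not sched.reserve(5, 15)   # overlapping request is refused
assert sched.reserve(10, 20)      # back-to-back window is fine
```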
Lavian T.; Mambretti J.; Cutrell D.; Cohen H.J.; Merrill S.; Durairaj R.; Daspit P.; Monga I.; Naiksatam S.; Figueira S.M.; Gutierrez D.; Hoang D.B.; Travostino F.; CCGRID 2004, pp. 762-764.
Next generation applications and architectures (for example, Grids) are driving radical changes in the nature of traffic, service models, technology, and cost, creating opportunities for an advanced communications infrastructure to tackle next generation data services. To take advantage of these trends and opportunities, research communities are creating new architectures, such as the Open Grid Service Architecture (OGSA), which are being implemented in new prototype advanced infrastructures. The DWDM-RAM project, funded by DARPA, is actively addressing the challenges of next generation applications. DWDM-RAM is an architecture for data-intensive services enabled by next generation dynamic optical networks. It develops and demonstrates a novel architecture for new data communication services, within the OGSA context, that allows for managing extremely large sets of distributed data. Novel features move network services beyond notions of the network as a managed resource, for example, by including capabilities for dynamic on-demand provisioning and advance scheduling. DWDM-RAM encapsulates optical network resources (Lambdas, lightpaths) into a Grid service and integrates their management within the Open Grid Service Architecture. Migration to emerging standards such as WS-Resource Framework (WS-RF) should be straightforward. In initial applications, DWDM-RAM targets specific data-intensive services such as rapid, massive data transfers used by large scale eScience applications, including: high-energy physics, geophysics, life science, bioinformatics, genomics, medical morphometry, tomography, microscopy imaging, astronomical and astrophysical imaging, complex modeling, and visualization.
Nguyen C.; Hoang D.B.; Zhao I.L.; Lavian T.; Proceedings, 12th IEEE International Conference on Networks (ICON 2004), Singapore, Volume 2, 16-19 Nov. 2004, pp. 578-582.
Current Diffserv architecture lacks mechanisms for network path discovery with specific service performance. Our aim is to introduce an enhanced-Diffserv scheme utilizing a feedback loop to gather path information and allow better flexibility in managing Diffserv flows. We utilize state-of-the-art programmable routers that can host the control loop operation without compromising their normal routing and switching functionalities. Furthermore, the control feedback loop implemented on the control plane of the router can selectively alter the behaviour of a specific data flow in real-time.
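The feedback loop's decision step might be pictured as follows. The DSCP promotion/demotion rules here are invented for illustration and are not taken from the paper; only the general pattern (measure the path, then re-mark the flow on the control plane) reflects the abstract:

```python
def control_loop(measured_delay_ms, target_delay_ms, current_dscp):
    """One iteration of a hypothetical Diffserv feedback loop:
    if the measured path delay exceeds the target, promote the flow
    to a higher-priority DSCP; if it is well under target, demote it
    to free up the premium class."""
    PROMOTION = {"AF11": "AF21", "AF21": "EF"}   # illustrative ladder
    DEMOTION = {"EF": "AF21", "AF21": "AF11"}

    if measured_delay_ms > target_delay_ms:
        return PROMOTION.get(current_dscp, current_dscp)
    if measured_delay_ms < 0.5 * target_delay_ms:
        return DEMOTION.get(current_dscp, current_dscp)
    return current_dscp  # within band: leave the marking alone


assert control_loop(30, 20, "AF11") == "AF21"  # too slow: promote
assert control_loop(5, 20, "AF21") == "AF11"   # ample headroom: demote
assert control_loop(15, 20, "EF") == "EF"      # within band: unchanged
```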
Lavian T.; Hoang D.B.; Mambretti J.; Figueira S.; Naiksatam S.; Kaushil N.; Monga I.; Durairaj R.; Cutrell D.; Merrill S.; Cohen H.; Daspit P.; Travostino F.; GridNets 2004, San Jose, CA, October 2004.
Data-intensive Grid applications often deal with multiple terabytes and even petabytes of data. For them to be effectively deployed over distances, it is crucial that Grid infrastructures learn how to best exploit high-performance networks (such as agile optical networks). The network footprint of these Grid applications shows pronounced peaks and valleys in utilization, prompting a radical overhaul of traditional network provisioning styles such as peak provisioning and point-and-click or operator-assisted provisioning. A Grid stack must become capable of dynamically orchestrating a complex set of variables related to application requirements, data services, and network provisioning services, all within a rapidly and continually changing environment. Presented here is a platform that addresses some of these issues. This service platform closely integrates a set of large-scale data services with those for dynamic bandwidth allocation, through a network resource middleware service, using an OGSA-compliant interface that allows direct access by external applications. Recently, this platform was implemented as an experimental research prototype on a unique wide-area optical networking testbed incorporating state-of-the-art photonic components. The paper, which presents initial results of research conducted on this prototype, indicates that these methods have the potential to address multiple major challenges related to data-intensive applications. Given the complexities of this topic, especially where scheduling is required, only selected aspects of the platform are considered in this paper.
Dimitra Simeonidou, Reza Nejabati, Bill St. Arnaud, Micah Beck, Peter Clarke, Doan B. Hoang, David Hutchison, Gigi Karmous-Edwards, Tal Lavian, Jason Leigh, Joe Mambretti, Volker Sander, John Strand, Franco Travostino; Global Grid Forum (GGF) GHPN Standard GFD-I.036, August 2004.
During the past few years it has become evident to the technical community that computational resources cannot keep up with the demands generated by some applications. As an example, particle physics experiments produce more data than can realistically be processed and stored in one location (i.e., several petabytes per year). In such situations, where intensive computational analysis of shared large-scale data is needed, one can try to use accessible computing resources distributed in different locations (a combined data and computing Grid).
Distributed computing and the concept of a computational Grid are not new paradigms, but until a few years ago networks were too slow to allow efficient use of remote resources. As the bandwidth and speed of networks have increased significantly, interest in distributed computing has risen to a new level. Recent advances in optical networking have created a radical mismatch between the optical transmission world and the electrical forwarding/routing world. Currently, a single strand of optical fiber can transmit more bandwidth than the entire Internet core. What's more, only 10% of potential wavelengths on 10% of available fiber pairs are actually lit; this represents just 1-2% of the potential bandwidth available in the fiber system. The resulting imbalance between supply and demand has led to severe price erosion of bandwidth products: annual STM-1 (155 Mbit/s) prices on major European routes fell by 85-90% between 1990 and 2002. It is therefore now technically and economically viable to think of a set of computing, storage, or combined computing-storage nodes coupled through a high-speed network as one large computational and storage device.
The use of the available fiber and DWDM infrastructure for the global Grid network is an attractive proposition, ensuring global reach and huge amounts of cheap bandwidth. Fiber and DWDM networks have been great enablers of the World Wide Web, fulfilling the capacity demand generated by Internet traffic and providing global connectivity. In a similar way, optical technologies are expected to play an important role in creating an efficient infrastructure for supporting Grid applications.
The need for high-throughput networks is evident in e-Science applications, as the U.S. National Science Foundation (NSF) and the European Commission have acknowledged. These applications need very high bandwidth between a limited number of destinations. With the drop in prices for raw bandwidth, a substantial cost is going to be in the router infrastructure in which the circuits are terminated. “The current L3-based architectures can’t effectively transmit Petabytes or even hundreds of Terabytes, and they impede service provided to high-end data-intensive applications. Current HEP projects at CERN and SLAC already generate Petabytes of data. This will reach Exabytes (10^18) by 2012, while the Internet-2 cannot effectively meet today’s transfer needs.”
The present document aims to discuss solutions towards an efficient and intelligent network infrastructure for the Grid taking advantage of recent developments in optical networking technologies.
Figueira S.; Naiksatam S.; Cohen H.; Cutrell D.; Daspit P.; Gutierrez D.; Hoang D.B.; Lavian T.; Mambretti J.; Merrill S.; Travostino F.; Proceedings, 4th IEEE/ACM International Symposium on Cluster Computing and the Grid, Chicago, USA, April 2004, pp. 707-714.
Advances in Grid technology enable the deployment of data-intensive distributed applications, which require moving terabytes or even petabytes of data between data banks. The current underlying networks cannot provide dedicated links with adequate end-to-end sustained bandwidth to support the requirements of these Grid applications. DWDM-RAM is a novel service-oriented architecture, which harnesses the enormous bandwidth potential of optical networks and demonstrates their on-demand usage on the OMNInet. Preliminary experiments suggest that dynamic optical networks, such as the OMNInet, are the ideal option for transferring such massive amounts of data. DWDM-RAM incorporates an OGSI/OGSA compliant service interface and promotes greater convergence between dynamic optical networks and data intensive Grid computing.
Lavian T.; Hoang D.B.; Travostino F.; Wang P.Y.; Subramanian S.; Monga I.; IEEE Transactions on Systems, Man, and Cybernetics, special issue on technologies promoting computational intelligence, openness and programmability in networks and Internet services, Volume 34, Issue 1, Feb. 2004, pp. 58-68.
With their increasingly sophisticated applications, users promote the notion that there is more to a network (be it an intranet or the Internet) than mere L1-L3 connectivity. In what amounts to a next-generation service contract between users and the network, users want the network to offer services that are as ubiquitous and dependable as a dial tone. Typical services include application-aware firewalls, directories, nomadic support, virtualization, load balancing, alternate-site failover, etc. To fulfill this vision, a service architecture is needed: an architecture wherein end-to-end services compose, on demand, across network domains, technologies, and administrative boundaries. Such an architecture requires programmable mechanisms and programmable network devices for service enabling, service negotiation, and service management. The bedrock foundation of the architecture, and the key focus of this paper, is an open-source programmable service platform explicitly designed to best exploit commercial-grade network devices. The platform predicates a full separation of concerns: control-intensive operations are executed in software, whereas data-intensive operations are delegated to hardware. This way, the platform is capable of performing wire-speed content filtering and activating network services according to the state of data and control flows. The paper describes the platform and some distinguishing services realized on it.
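The separation of concerns the platform predicates – control in software, data handling delegated to hardware – can be sketched as a control-plane service that installs rules into a simulated forwarding engine. All names here are illustrative, not the platform's actual API:

```python
class ForwardingEngine:
    """Stands in for the silicon-based forwarding hardware: it applies
    installed rules at wire speed; the control software never touches
    per-packet data itself."""
    def __init__(self):
        self.rules = []

    def install(self, match, action):
        self.rules.append((match, action))

    def forward(self, packet):
        for match, action in self.rules:
            if match(packet):
                return action
        return "forward"  # default behavior: pass the packet through


class ContentFilterService:
    """Control-plane service: decides policy once, then delegates
    enforcement to the engine instead of inspecting packets itself."""
    def __init__(self, engine):
        engine.install(
            lambda p: p.get("dst_port") == 80
                      and b"blocked" in p.get("payload", b""),
            "drop")


engine = ForwardingEngine()
ContentFilterService(engine)
assert engine.forward({"dst_port": 80, "payload": b"blocked content"}) == "drop"
assert engine.forward({"dst_port": 80, "payload": b"ok"}) == "forward"
```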
DWDM-RAM: An Architecture for Data Intensive Service Enabled by Next Generation Dynamic Optical Networks
Lavian, T.; Cutrell, D.; Mambretti, J.; Weinberger J.; Gutierrez D.; Naiksatam S.; Figueira S.; Hoang D. B.; Supercomputing Conference, SC2003 Igniting Innovation, Phoenix, November 2003.
Lavian T.; Wang P.; Durairaj R.; Hoang D.; Travostino F.; 10th International Conference on Telecommunications (ICT 2003), Tahiti, Volume 2, 23 Feb.-1 March 2003, pp. 1441-1447.
After a decade of research and development, IP multicast has still not been deployed widely in the global Internet due to many open technical issues: lack of admission control, poor scaling to large numbers of groups, and the need for substantial infrastructure modifications. To provide the benefits of IP multicast without requiring direct router support or the presence of a physical broadcast medium, various application-level multicast (ALM) models have been attempted. However, several problems with ALM remain: unnecessary coupling between an application and its multicasting support, bottlenecks at network access links, and the considerable processing power required at end nodes to support ALM mechanisms. This paper proposes an architecture that addresses these problems by delegating application-multicasting support mechanisms to smart edge devices associated with the application end nodes. The architecture gives rise to an interesting edge-device anycasting technology that lies between IP multicasting and application-layer multicasting and enjoys the benefits of both. Furthermore, the architecture may provide sufficient cost-benefit for adoption by service providers. The paper presents initial results obtained from the implementation of a video streaming application over a testbed that implements the proposed architecture.
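The core economy of the edge-device approach – the source sends one copy per edge, and replication happens near the receivers – can be shown in a few lines. This is a hypothetical sketch, not the paper's implementation; all names are invented:

```python
class EdgeDevice:
    """Smart edge device: replicates the stream to its locally
    attached receivers, relieving the source and the core network."""
    def __init__(self, name):
        self.name = name
        self.receivers = []

    def join(self, receiver):
        self.receivers.append(receiver)

    def deliver(self, packet):
        # Fan-out happens here, close to the receivers.
        return [(r, packet) for r in self.receivers]


def source_send(packet, edges):
    """The source unicasts one copy per edge device, regardless of
    how many receivers sit behind each edge."""
    deliveries = []
    for edge in edges:
        deliveries.extend(edge.deliver(packet))
    return len(edges), deliveries


edge_a, edge_b = EdgeDevice("a"), EdgeDevice("b")
edge_a.join("r1"); edge_a.join("r2"); edge_b.join("r3")
copies_sent, deliveries = source_send("frame-0", [edge_a, edge_b])
assert copies_sent == 2        # two unicast copies leave the source
assert len(deliveries) == 3    # yet all three receivers get the frame
```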
Raman B.; Agarwal S.; Chen Y.; Caesar M.; Cui W.; Lai K.; Lavian T.; Machiraju S.; Mao Z.M.; Porter G.; Roscoe T.; Subramanian L.; Suzuki T.; Zhuang S.; Joseph A.D.; Katz R.H.; Stoica I.; Proceedings of the First International Conference on Pervasive Computing (ACM Pervasive 2002), pp. 1-14.
Services are capabilities that enable applications and are of crucial importance to pervasive computing in next-generation networks. Service Composition is the construction of complex services from primitive ones; thus enabling rapid and flexible creation of new services. The presence of multiple independent service providers poses new and significant challenges. Managing trust across providers and verifying the performance of the components in composition become essential issues. Adapting the composed service to network and user dynamics by choosing service providers and instances is yet another challenge. In SAHARA, we are developing a comprehensive architecture for the creation, placement, and management of services for composition across independent providers. In this paper, we present a layered reference model for composition based on a classification of different kinds of composition. We then discuss the different overarching mechanisms necessary for the successful deployment of such an architecture through a variety of case-studies involving composition.
Lavian T.; Wang P.; Travostino F.; Subramanian S.; Durairaj R.; Hoang D.B.; Sethaput V.; Culler D.; Proceedings of the DARPA Active Networks Conference and Exposition (DANCE), 29-30 May 2002, pp. 65-76.
A significant challenge arising from today’s increasing Internet traffic is the ability to flexibly incorporate intelligent control in high performance commercial network devices. The paper tackles this challenge by introducing the active flow manipulation (AFM) mechanism to enhance traffic control intelligence of network devices through programmability. With AFM, customer network services can exercise active network control by identifying distinctive flows and applying specified actions to alter network behavior in real-time. These services are dynamically loaded through Openet by the CPU-based control unit of a network node and are closely coupled with its silicon-based forwarding engines, without negatively impacting forwarding performance. AFM is exposed as a key enabling technology of the programmable networking platform Openet. The effectiveness of our approach is demonstrated by four active network services on commercial network nodes.
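The AFM idea – identify a distinctive flow and apply an action to it in real time, without disturbing other traffic – can be reduced to a toy flow table. The names, flow tuple, and actions below are invented for illustration and are not the Openet/AFM API:

```python
class FlowTable:
    """Toy control-plane flow table: a service identifies a flow by
    its 5-tuple and attaches an action; all other traffic keeps the
    default forwarding behavior."""
    def __init__(self):
        self.actions = {}  # 5-tuple -> action

    def apply_action(self, flow, action):
        self.actions[flow] = action

    def treat(self, flow):
        return self.actions.get(flow, "default-forward")


# 5-tuple: (src addr, dst addr, IP protocol, src port, dst port)
flow = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)
table = FlowTable()
table.apply_action(flow, "redirect-to-cache")

assert table.treat(flow) == "redirect-to-cache"
# An unrelated flow is untouched, preserving forwarding behavior:
assert table.treat(("10.0.0.3", "10.0.0.2", 6, 999, 80)) == "default-forward"
```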
Subramanian S.; Wang P.; Durairaj R.; Rasimas J.; Travostino F.; Lavian T.; Hoang D.B.; Proceedings of the DARPA Active Networks Conference and Exposition (DANCE), 29-30 May 2002, pp. 344-354.
The Internet has seen an increase in complexity due to the introduction of new types of networking devices and services, particularly at points of discontinuity known as network edges. As the networking industry continues to add revenue generating services at network edges, there is an increasing need to provide a systematic method for dynamically introducing and providing these new services in lieu of the ad-hoc approach that is in use today. To this end we support a phased approach to “activating” the Internet and suggest that there exists an immediate need for realizing active networks concepts at the network edges. In this context, we present our efforts towards the development of a content-aware active gateway (CAG) architecture. With the help of two practical services running on our initial prototype, built from commercial networking devices, we give a qualitative and quantitative view of the CAG potential.