Patents

As an inventor on more than 120 issued and filed patents, over 60 of them prosecuted pro se before the US Patent and Trademark Office (USPTO), Dr. Lavian brings an insider's grasp of patents covering telecommunications, network communications, Internet protocols, mobile wireless, and Internet web technologies.
As a principal scientist, architect, engineer, and software developer at Nortel Networks, he created and chaired the company's EDN Patent Committee. From that post, Dr. Lavian helped secure Nortel's IP rights to a continuous stream of innovative ideas for switches, routers, and network communications devices, and he strengthened the company's IP portfolio by evaluating and enhancing its existing technology assets.
Dr. Lavian's patents extend to all facets of the intellectual property underlying telecommunications, network communications, Internet protocols, and mobile wireless, including Wi-Fi and 802.11 (a/b/g/n/ac), MAC, PHY, OFDM, DSSS, Wireless LAN (WLAN), the TCP/IP suite, TCP, UDP, IP, Ethernet, 802.3, LAN, WAN, VPN, routing protocols, PSTN, circuit switching, IP telephony, VoIP, SIP, RTP, SS7, and video/audio.

Dr. Lavian is a named inventor on the patents summarized below:

An apparatus and method for dynamic assignment of classes of traffic to a priority queue. Bandwidth consumption by one or more types of packet traffic received in the packet forwarding device is monitored to determine whether the bandwidth consumption exceeds a threshold. If the bandwidth consumption exceeds the threshold, assignment of at least one type of packet traffic of the one or more types of packet traffic is changed from a queue having a first priority to a queue having a second priority.
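To make the mechanism concrete, here is a minimal Java sketch, not taken from the patent, of a forwarding device that demotes a traffic class to a lower-priority queue once its measured bandwidth crosses a threshold. The class names, the threshold value, and the one-interval measurement window are illustrative assumptions.

```java
import java.util.EnumMap;
import java.util.Map;

public class DynamicQueueAssignment {
    enum TrafficType { VOICE, VIDEO, BULK }
    enum Priority { HIGH, LOW }

    // Hypothetical threshold in bytes per interval; the patent leaves the value to configuration.
    private static final long THRESHOLD_BYTES = 10_000_000L;

    private final Map<TrafficType, Priority> queueAssignment = new EnumMap<>(TrafficType.class);
    private final Map<TrafficType, Long> bytesThisInterval = new EnumMap<>(TrafficType.class);

    DynamicQueueAssignment() {
        for (TrafficType t : TrafficType.values()) {
            queueAssignment.put(t, Priority.HIGH);
            bytesThisInterval.put(t, 0L);
        }
    }

    // Called for every forwarded packet to accumulate per-class bandwidth consumption.
    void recordPacket(TrafficType type, int sizeBytes) {
        bytesThisInterval.merge(type, (long) sizeBytes, Long::sum);
    }

    // Called once per measurement interval: demote classes that exceeded the threshold,
    // restore classes that dropped back below it.
    void reevaluate() {
        for (TrafficType t : TrafficType.values()) {
            long consumed = bytesThisInterval.get(t);
            queueAssignment.put(t, consumed > THRESHOLD_BYTES ? Priority.LOW : Priority.HIGH);
            bytesThisInterval.put(t, 0L);
        }
    }

    Priority queueFor(TrafficType type) {
        return queueAssignment.get(type);
    }

    public static void main(String[] args) {
        DynamicQueueAssignment dqa = new DynamicQueueAssignment();
        dqa.recordPacket(TrafficType.VIDEO, 15_000_000);
        dqa.reevaluate();
        System.out.println("VIDEO now queued at: " + dqa.queueFor(TrafficType.VIDEO)); // LOW
    }
}
```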
Embodiments of the invention provide a portable device comprising at least one processor and a memory, coupled to the processor, that holds data. The portable device also comprises a detector configured to detect at least one external device, which is configured to connect to the portable device, and an interface for connecting to that external device. The interface is configured to transmit or receive one or more control signals, excluding the data. Furthermore, the portable device comprises a controller configured to enable control of the portable device from the external device, and control of the external device from the portable device, through the interface.
A Grid Proxy Architecture for Network Resources (GPAN) is proposed to allow Grid applications to access resources shared in communication network domains. GPAN bridges Grid services serving user applications and network services controlling network devices through its proxy functions such as resource data and management proxies. Working with Grid resource index and broker services, GPAN employs distributed network service peers (NSP) in network domains to discover, negotiate, and allocate network resources such as bandwidth for Grid applications. An elected master NSP is the unique Grid node that runs GPAN and represents the whole network, sharing network resources with Grids without requiring the network devices themselves to participate in the Grid. GPAN provides the Grid Proxy service (GPS) to interface with Grid services and applications, and the Grid Delegation service (GDS) to interface with network services to utilize network resources. Resource-based XML messaging is employed for the GPAN proxy communication.
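A highly simplified sketch of the proxy idea follows, under the assumption that requests arrive as plain bandwidth figures and responses are returned as small XML snippets. The class, method, and domain names are placeholders and are not part of the patent.

```java
import java.util.ArrayList;
import java.util.List;

public class GpanSketch {
    // A network service peer that can allocate bandwidth within its own domain.
    static class NetworkServicePeer {
        final String domain;
        long freeBandwidthMbps;
        NetworkServicePeer(String domain, long freeBandwidthMbps) {
            this.domain = domain;
            this.freeBandwidthMbps = freeBandwidthMbps;
        }
        boolean allocate(long mbps) {
            if (mbps > freeBandwidthMbps) return false;
            freeBandwidthMbps -= mbps;
            return true;
        }
    }

    // Grid Delegation Service: the GPAN side that talks to the network services.
    static class GridDelegationService {
        final List<NetworkServicePeer> peers = new ArrayList<>();
        String delegate(long mbps) {
            for (NetworkServicePeer p : peers)
                if (p.allocate(mbps)) return p.domain;
            return null;
        }
    }

    // Grid Proxy Service: the single point the Grid sees; answers with an XML-style message.
    static class GridProxyService {
        final GridDelegationService gds;
        GridProxyService(GridDelegationService gds) { this.gds = gds; }
        String requestBandwidth(long mbps) {
            String domain = gds.delegate(mbps);
            return domain == null
                    ? "<gpan-response granted=\"false\"/>"
                    : "<gpan-response granted=\"true\" domain=\"" + domain + "\" mbps=\"" + mbps + "\"/>";
        }
    }

    public static void main(String[] args) {
        GridDelegationService gds = new GridDelegationService();
        gds.peers.add(new NetworkServicePeer("domain-a", 400));
        gds.peers.add(new NetworkServicePeer("domain-b", 1000));
        GridProxyService gps = new GridProxyService(gds);
        System.out.println(gps.requestBandwidth(800)); // satisfied by domain-b
    }
}
```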
A network element (NE) includes an intelligent interface (II) with its own operating environment rendering it active during the NE boot process, and with separate intelligence allowing it to take actions on the NE prior to, during, and after the boot process. The combination of independent operation and increased intelligence provides enhanced management opportunities to enable the NE to be controlled throughout the boot process and after completion of the boot process. For example, files may be uploaded to the NE before or during the boot process to restart the NE from a new software image. The II allows this transfer to occur in parallel on multiple NEs from a centralized storage resource. Diagnostic checks may be run on the NE, and files, MIB information, and other data may be transmitted from the II to enable a network manager to manage the NE more effectively.
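The key point is that the interface runs independently of the element it manages, so it can act at any stage of the boot cycle. The toy sketch below illustrates that separation only; the state names and actions are illustrative, not drawn from the patent.

```java
public class IntelligentInterfaceSketch {
    enum BootState { PRE_BOOT, BOOTING, RUNNING }

    // Minimal model of the network element the interface manages.
    static class NetworkElement {
        BootState state = BootState.PRE_BOOT;
        String softwareImage = "factory-default";
    }

    // The intelligent interface has its own operating environment, so it can act on the NE
    // regardless of the NE's boot state.
    static class IntelligentInterface {
        private final NetworkElement ne;
        IntelligentInterface(NetworkElement ne) { this.ne = ne; }

        void uploadImage(String image) {
            // Allowed even before or during boot, because the II runs independently of the NE.
            ne.softwareImage = image;
            System.out.println("Uploaded image '" + image + "' while NE was " + ne.state);
        }

        void runDiagnostics() {
            System.out.println("Diagnostics run on NE in state " + ne.state);
        }
    }

    public static void main(String[] args) {
        NetworkElement ne = new NetworkElement();
        IntelligentInterface ii = new IntelligentInterface(ne);
        ii.uploadImage("router-os-7.2.img"); // before boot
        ne.state = BootState.BOOTING;
        ii.runDiagnostics();                 // during boot
        ne.state = BootState.RUNNING;
        ii.runDiagnostics();                 // after boot
    }
}
```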
A method and apparatus has been shown and described which allows Quality of Service to be controlled at a temporal granularity. Time-value curves, generated for each task, ensure that mission resources are utilized in a manner which optimizes mission performance. It should be noted, however, that although the present invention has shown and described the use of time-value curves as applied to mission workflow tasks, the present invention is not limited to this application; rather, it can be readily appreciated by one of skill in the art that time-value curves may be used to optimize the delivery of any resource to any consumer by taking into account the dynamic environment of the consumer and resource.
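A time-value curve is simply a function mapping a task's completion time to the value it delivers to the mission, and a scheduler can compare tasks by evaluating their curves at the current time. The curves and task names in this minimal sketch are invented for illustration.

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class TimeValueCurveSketch {
    // A task whose value to the mission depends on when it completes.
    static class Task {
        final String name;
        final DoubleUnaryOperator timeValueCurve; // completion time (s) -> mission value
        Task(String name, DoubleUnaryOperator curve) { this.name = name; this.timeValueCurve = curve; }
    }

    // Give the next resource slot to the task that yields the most value if completed now.
    static Task schedule(List<Task> tasks, double now) {
        Task best = tasks.get(0);
        for (Task t : tasks)
            if (t.timeValueCurve.applyAsDouble(now) > best.timeValueCurve.applyAsDouble(now)) best = t;
        return best;
    }

    public static void main(String[] args) {
        // Illustrative curves: a hard deadline at t = 10 s, and a slow exponential decay.
        Task sensorFeed = new Task("sensor-feed", t -> t <= 10 ? 100.0 : 0.0);
        Task bulkSync   = new Task("bulk-sync",  t -> 80.0 * Math.exp(-0.05 * t));
        System.out.println("t=5:  run " + schedule(List.of(sensorFeed, bulkSync), 5).name);
        System.out.println("t=12: run " + schedule(List.of(sensorFeed, bulkSync), 12).name);
    }
}
```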
Network resources allocated for particular application traffic are aware of the characteristics of the L4+ content to be transmitted. One embodiment of the invention realizes network resource allocation with three intelligent modules: gateway, provisioning, and classification. The gateway module exerts network control functions, such as traffic path setup and bandwidth allocation, in response to application requests for network resources; characteristics of the content are also specified in those requests. At the request of the gateway module, the provisioning module allocates network resources such as bandwidth in optical networks as well as in edge devices; an optical network resource allocation yields a provisioned optical route. Also at the request of the gateway module, the classification module differentiates application traffic according to the content specifications, creating and applying content-aware rule data that edge devices use to forward content-specified traffic toward the respective provisioned optical routes.
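The three-module split can be sketched as below, with the provisioning and classification modules stubbed out. The content types, route identifiers, and rule syntax are assumptions made for the example.

```java
public class ContentAwareAllocationSketch {
    // Application request: requested bandwidth plus a description of the L4+ content that will flow.
    record ResourceRequest(String contentType, int bandwidthMbps) {}

    // Provisioning module: reserves an optical route with the requested bandwidth.
    static class ProvisioningModule {
        String provisionRoute(int bandwidthMbps) {
            return "optical-route-1@" + bandwidthMbps + "Mbps"; // illustrative identifier
        }
    }

    // Classification module: turns the content description into a forwarding rule for edge devices.
    static class ClassificationModule {
        String buildRule(String contentType, String route) {
            return "match content-type=" + contentType + " -> forward via " + route;
        }
    }

    // Gateway module: the entry point that coordinates the other two for each application request.
    static class GatewayModule {
        private final ProvisioningModule provisioning = new ProvisioningModule();
        private final ClassificationModule classification = new ClassificationModule();
        String handle(ResourceRequest req) {
            String route = provisioning.provisionRoute(req.bandwidthMbps());
            return classification.buildRule(req.contentType(), route);
        }
    }

    public static void main(String[] args) {
        System.out.println(new GatewayModule().handle(new ResourceRequest("video/mp4", 500)));
    }
}
```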
The present invention facilitates routing traffic over a network and distributing application level support among multiple routing devices during routing. Routing nodes are configured to process the content of the traffic to provide the requisite application level support. The traffic is routed, in part, based on the resources available for providing the processing. The processing of the traffic may be distributed throughout the network based on processing capacity of the routing nodes at any given time and given the amount of network congestion.
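One way to picture the routing decision is as a score over candidate nodes that weighs spare processing capacity against congestion. The weights and node names below are illustrative assumptions, not values taken from the patent.

```java
import java.util.Comparator;
import java.util.List;

public class ContentAwareRoutingSketch {
    // A routing node advertises how much application-level processing it can still take on
    // and how congested its links currently are (both normalized to [0, 1]).
    record RoutingNode(String id, double freeProcessing, double congestion) {}

    // Choose the node with the best combined score; the 0.7/0.3 weighting is illustrative.
    static RoutingNode pickNode(List<RoutingNode> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble(n -> 0.7 * n.freeProcessing() - 0.3 * n.congestion()))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<RoutingNode> nodes = List.of(
                new RoutingNode("edge-1", 0.9, 0.6),
                new RoutingNode("edge-2", 0.5, 0.1),
                new RoutingNode("core-1", 0.2, 0.2));
        System.out.println("Process and forward via " + pickNode(nodes).id());
    }
}
```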
A computer-implemented method and system is provided for distributing management of network resources on a network to the network devices themselves. During execution, the system receives a request on a network device to execute a task that performs a set of operations related to managing the network, receives an application over the network that includes the operations for performing the task, processes operations on the network device that request network parameters from a remote network device, transmits the request for those parameters over the network to the remote network device, and receives the requested parameters over the network from the remote network device.
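In essence, a management task runs locally on one device but pulls the parameters it needs from its peers. The sketch below models that interaction in memory; the task, interface, and parameter names are hypothetical.

```java
import java.util.Map;

public class DistributedManagementSketch {
    // Stand-in for the remote network device that owns the requested parameter.
    interface RemoteDevice {
        String getParameter(String name);
    }

    // A management task pushed to the local device; it queries a peer to complete its work.
    static class LinkAuditTask {
        private final RemoteDevice peer;
        LinkAuditTask(RemoteDevice peer) { this.peer = peer; }
        String run() {
            // The local device transmits a parameter request to the remote device and
            // uses the reply to finish the management operation locally.
            String mtu = peer.getParameter("interface.mtu");
            return "peer MTU = " + mtu + "; audit complete";
        }
    }

    public static void main(String[] args) {
        // Illustrative in-memory peer; in the patent the request travels over the network.
        Map<String, String> peerParams = Map.of("interface.mtu", "9000");
        RemoteDevice peer = peerParams::get;
        System.out.println(new LinkAuditTask(peer).run());
    }
}
```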
External resources may be interfaced with a network element using an intelligent interface including an independent processing environment to enable the operational capabilities of the network element to be enhanced. The intelligent interface may serve as an interface to external resources such as network software repositories, storage servers, logging facilities and security services. By providing an intelligent interface, it is possible to interface external resources and enhanced services to the network element while allowing processing requirements to be offloaded to an external device or to the intelligent interface itself, so that the resources of the network element may be more fully utilized to perform network operations such as switching and routing functions. The intelligent interface also enables new resources to be made available to the network element when they are needed. An external communication port of the intelligent interface may be configured to operate using one of the USB standards.
An XML accessible network device is capable of performing functions in response to an XML encoded request transmitted over a network. It includes a network data transfer service, coupled to a network, that is capable of receiving XML encoded requests from a client also connected to the network. An XML engine is capable of understanding and parsing the XML encoded requests according to a corresponding DTD. The XML engine further instantiates a service using parameters provided in the XML encoded request and launches the service for execution on the network device. A set of device APIs interacts with hardware and software on the network device for executing the requested service on the network device. If necessary, a response is further collected from the device and provided to the client in a response message.
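A minimal sketch of the request path, parsing an XML request and dispatching it to a stubbed device API. The request format, service name, and port identifier are assumptions for the example; the patent additionally validates requests against a DTD.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XmlDeviceServiceSketch {
    // Device-side handler: interprets the parsed request and invokes a (stubbed) device API.
    static String execute(String service, String portId) {
        if ("port-status".equals(service)) {
            return "<response port=\"" + portId + "\" status=\"up\"/>"; // stubbed device API call
        }
        return "<response error=\"unknown service\"/>";
    }

    public static void main(String[] args) throws Exception {
        // Illustrative XML-encoded request received over the network data transfer service.
        String request = "<request service=\"port-status\" port=\"ge-0/0/1\"/>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(request.getBytes(StandardCharsets.UTF_8)));
        Element root = doc.getDocumentElement();
        String response = execute(root.getAttribute("service"), root.getAttribute("port"));
        System.out.println(response);
    }
}
```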
A system and method is provided for using an object-oriented interface for network management. An example system and method receives a management information base (MIB) including information related to one or more aspects of a network device, extracts a subset of information from the MIB describing at least one aspect of the network device, and generates a set of object-oriented classes and object-oriented methods corresponding to the subset of information in the MIB. In addition, this system and method interfaces with network management information on a network device, by providing a management information base (MIB) including information related to one or more aspects of a network device, and using a set of object-oriented classes and object-oriented methods that corresponds to the MIB and information related to one or more aspects of the network device.
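To illustrate the mapping, the hand-written class below plays the role of a class generated from a MIB interface entry, exposing object-oriented accessors over raw MIB values. The object names (ifDescr, ifSpeed, ifOperStatus) are standard MIB-II objects; everything else is an assumption of the sketch.

```java
import java.util.Map;

public class MibObjectMappingSketch {
    // Illustrative "generated" class corresponding to a MIB interface entry.
    static class IfEntry {
        private final Map<String, String> rawMib;
        IfEntry(Map<String, String> rawMib) { this.rawMib = rawMib; }
        // Object-oriented accessors named after the MIB objects they wrap.
        String getIfDescr()      { return rawMib.get("ifDescr"); }
        long   getIfSpeed()      { return Long.parseLong(rawMib.get("ifSpeed")); }
        int    getIfOperStatus() { return Integer.parseInt(rawMib.get("ifOperStatus")); }
    }

    public static void main(String[] args) {
        // Values a management station might have read from the device's MIB.
        Map<String, String> mibSubset = Map.of(
                "ifDescr", "GigabitEthernet0/1",
                "ifSpeed", "1000000000",
                "ifOperStatus", "1");
        IfEntry port = new IfEntry(mibSubset);
        System.out.println(port.getIfDescr() + " up=" + (port.getIfOperStatus() == 1));
    }
}
```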
A data communication network for DiffServ communications includes a customized Java socket factory added to clients connected to a data communication network having a DiffServ-enabled edge router. When an application running on a client system wishes to make a remote procedure call to a remote server system on another network, it makes a call to an RMI stub which invokes an RMI transport layer having the custom socket factory to generate a socket used in the RMI call. The custom socket factory detects when a high priority RMI call is being made and can determine the identity of the calling procedure as well. The socket factory makes a side channel communication to the edge router to provide this information to the edge router, which then makes use of this data when performing DiffServ classification for packets transmitted during the course of the call.
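A rough sketch of the client-side piece, assuming the high-priority flag is known when the factory is constructed and that the edge router listens for classification messages at an illustrative address and in an illustrative text format.

```java
import java.io.IOException;
import java.io.Serializable;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;

public class DiffServSocketFactorySketch implements RMIClientSocketFactory, Serializable {
    // Illustrative edge-router address for the side-channel message; not from the patent text.
    private static final String EDGE_ROUTER_HOST = "192.0.2.1";
    private static final int EDGE_ROUTER_PORT = 5000;

    private final boolean highPriority;

    public DiffServSocketFactorySketch(boolean highPriority) { this.highPriority = highPriority; }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        if (highPriority) {
            notifyEdgeRouter(host, port); // side channel first, then the normal RMI connection
        }
        return new Socket(host, port);
    }

    // Side-channel communication telling the edge router which flow deserves DiffServ marking.
    private void notifyEdgeRouter(String rmiHost, int rmiPort) throws IOException {
        try (Socket side = new Socket(EDGE_ROUTER_HOST, EDGE_ROUTER_PORT)) {
            String msg = "classify host=" + rmiHost + " port=" + rmiPort + " dscp=EF\n";
            side.getOutputStream().write(msg.getBytes());
        }
    }
}
```

A factory of this kind would typically be supplied when the remote object is exported, for example via UnicastRemoteObject.exportObject with client and server socket factories, so that every connection the RMI transport opens for that object passes through it.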
The present invention relates to an apparatus and method for dynamically loading and managing software services on a network device. A service environment ported to the network device includes a service environment kernel and a virtual machine. The service environment kernel continually operates on the network device and manages the downloading of services from a remote location onto the network device. In accordance with a request from a remote client such as a network manager, the service environment kernel causes instructions corresponding to the downloaded service to be provided to the virtual machine for execution on the network device. Associated with the service are service relationships. The service environment kernel manages these relationships by maintaining a registry of services and their dependencies on other services. The service environment kernel also controls the execution of services in accordance with the service relationships.
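The service-relationship bookkeeping can be pictured as a registry plus a dependency-first start routine, as in this minimal sketch; the service names and the in-process Runnable bodies stand in for services that would really be downloaded and handed to the virtual machine.

```java
import java.util.*;

public class ServiceKernelSketch {
    // A downloadable service and the names of the services it depends on.
    record ServiceSpec(String name, List<String> dependsOn, Runnable body) {}

    // Minimal kernel: keeps a registry of known services and starts dependencies first.
    static class ServiceKernel {
        private final Map<String, ServiceSpec> registry = new HashMap<>();
        private final Set<String> running = new LinkedHashSet<>();

        void register(ServiceSpec spec) { registry.put(spec.name(), spec); }

        void start(String name) {
            if (running.contains(name)) return;
            ServiceSpec spec = registry.get(name);
            if (spec == null) throw new IllegalStateException("unknown service: " + name);
            for (String dep : spec.dependsOn()) start(dep);   // honor the service relationships
            spec.body().run();
            running.add(name);
        }
    }

    public static void main(String[] args) {
        ServiceKernel kernel = new ServiceKernel();
        kernel.register(new ServiceSpec("logging", List.of(), () -> System.out.println("logging up")));
        kernel.register(new ServiceSpec("flow-monitor", List.of("logging"),
                () -> System.out.println("flow-monitor up")));
        kernel.start("flow-monitor"); // starts logging first, then flow-monitor
    }
}
```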
A data communication network for DiffServ communications has a software library added to clients connected to a data communication network having a DiffServ-enabled edge router. When an application running on a client system wishes to make a remote procedure call to a remote server system on another network, it makes its usual call for RPC invocation using the software library. This RPC call is intercepted by a protocol layer interposed between the application layer and the underlying RPC transport code. The protocol layer detects when an RPC call is being made and can determine the identity of the calling procedure as well. The library makes a side channel communication to the edge router to provide this information to the edge router or alternative service decider, which then makes use of this data when performing DiffServ classification for packets transmitted during the course of the call.
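To show the interposition idea without a full RPC stack, the sketch below uses a Java dynamic proxy as a stand-in for the protocol layer that sits between the application and the transport; the interface, stub, and classification message are all invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RpcInterceptionSketch {
    // Application-facing remote interface; the name and method are illustrative.
    interface InventoryService {
        int reserveUnits(String sku, int count);
    }

    // Stand-in for the generated RPC transport stub.
    static class InventoryStub implements InventoryService {
        public int reserveUnits(String sku, int count) { return count; } // pretend remote call
    }

    // Interposed layer: sees every outgoing call before the transport does and can tell the
    // edge router (or another service decider) how to classify the resulting packets.
    static InventoryService interpose(InventoryService transport) {
        InvocationHandler handler = (proxy, method, args) -> {
            notifyServiceDecider(method);          // side-channel message, stubbed below
            return method.invoke(transport, args); // then proceed with the normal RPC path
        };
        return (InventoryService) Proxy.newProxyInstance(
                InventoryService.class.getClassLoader(),
                new Class<?>[] { InventoryService.class }, handler);
    }

    static void notifyServiceDecider(Method method) {
        System.out.println("classify flow for call: " + method.getName());
    }

    public static void main(String[] args) {
        InventoryService svc = interpose(new InventoryStub());
        System.out.println("reserved " + svc.reserveUnits("sku-42", 3));
    }
}
```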
A method of managing a network device includes providing a command-line interface application programming interface (CLI-API) compatible with the command-line interface (CLI) of the network device, receiving instructions from an application that calls one or more routines in the CLI-API, and generating at least one command in response to those instructions, wherein the command is compatible with the CLI of the network device. An apparatus includes a remote serial command-line interface (RS-CLI) device having a storage device capable of storing instructions, a network port capable of being connected to the network and of processing a network protocol stack in addition to receiving the instructions, a serial port capable of processing a serial protocol and of being connected to a non-application-enabled network device, and a processor capable of processing instructions stored in the storage device of the RS-CLI device.
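The CLI-API concept is simply a programmatic wrapper that turns method calls into device-compatible CLI commands. The command syntax and method names in this sketch are illustrative and not tied to any particular vendor or to the patent's claims.

```java
public class CliApiSketch {
    // Thin CLI-API: applications call methods, the wrapper emits CLI commands for the device.
    static class CliApi {
        private final StringBuilder session = new StringBuilder();

        void enterInterface(String name) { send("interface " + name); }
        void setDescription(String text) { send("description " + text); }
        void shutdown(boolean down)      { send(down ? "shutdown" : "no shutdown"); }

        private void send(String command) {
            // In the patent this would be delivered to the device's CLI over its serial or network port.
            session.append(command).append('\n');
        }

        String transcript() { return session.toString(); }
    }

    public static void main(String[] args) {
        CliApi cli = new CliApi();
        cli.enterInterface("ge-0/0/1");
        cli.setDescription("uplink to core");
        cli.shutdown(false);
        System.out.print(cli.transcript());
    }
}
```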
A method and system provides access to information about a resource associated with a network device. The method and system selects a layer for communicating with the requested resource associated with the network device in a network protocol stack having multiple layers, establishes an inner layer socket for communicating at the selected layer using an inner layer application programming interface (IL API) and a socket identifier associated with the requested resource, wherein the inner layer socket communicates using the selected layer and bypasses other layers in the network protocol stack, transmits the request for information about the resource through the inner layer socket and the socket identifier, receives the information about the resource in response to the transmission made through the inner layer socket, and passes the information about the resource through the inner layer socket to the application making the request.
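A toy model of the inner-layer-socket idea: a socket bound to one protocol layer exposes only that layer's resources, bypassing the rest of the stack. The layer names, resource identifiers, and values are assumptions made purely for illustration.

```java
import java.util.Map;

public class InnerLayerSocketSketch {
    enum Layer { LINK, NETWORK, TRANSPORT, APPLICATION }

    // Minimal model of the IL API: open a socket bound to one protocol layer and query a
    // resource through it, skipping the layers above and below.
    static class InnerLayerSocket {
        private final Layer layer;
        private final Map<String, String> resources;
        InnerLayerSocket(Layer layer, Map<String, String> resources) {
            this.layer = layer;
            this.resources = resources;
        }
        String query(String resourceId) {
            // Only resources owned by the selected layer are visible through this socket.
            return resources.getOrDefault(layer + ":" + resourceId, "unavailable at " + layer);
        }
    }

    public static void main(String[] args) {
        Map<String, String> deviceResources = Map.of(
                "NETWORK:arp-cache-size", "512",
                "TRANSPORT:open-connections", "38");
        InnerLayerSocket sock = new InnerLayerSocket(Layer.NETWORK, deviceResources);
        System.out.println("arp-cache-size = " + sock.query("arp-cache-size"));
        System.out.println("open-connections = " + sock.query("open-connections")); // wrong layer
    }
}
```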