Abstract of: US2025139409A1
Computer systems and computer-implemented methods train a neural network, by: (a) computing for each datum in a set of training data, activation values for nodes in the neural network and estimates of partial derivatives of an objective function for the neural network for the nodes in the neural network; (b) selecting a target node of the neural network and/or a target datum in the set of training data; (c) selecting a target-specific improvement model for the neural network, wherein the target-specific improvement model, when added to the neural network, improves performance of the neural network for the target node and/or the target datum, as the case may be; (d) training the target-specific improvement model; (e) merging the target-specific improvement model with the neural network to form an expanded neural network; and (f) training the expanded neural network.
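The lettered steps lend themselves to a short sketch. Below is a hedged Python/PyTorch illustration of the flow, assuming a toy regression network; the residual-style improvement head, the worst-loss target selection, and all hyperparameters are invented for the example and are not taken from the patent.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(64, 8), torch.randn(64, 1)            # toy training data
base = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
loss_fn = nn.MSELoss(reduction="none")

# (a) per-datum activations; per-datum losses stand in for the partial-derivative estimates
acts = base[1](base[0](X))                                 # hidden-node activation values
per_datum_loss = loss_fn(base(X), y).squeeze(1)

# (b) select a target datum: here, simply the datum with the largest loss
target_idx = int(torch.argmax(per_datum_loss))

# (c)+(d) build and train a small target-specific improvement model on the target datum
improvement = nn.Sequential(nn.Linear(8, 4), nn.Tanh(), nn.Linear(4, 1))
opt = torch.optim.Adam(improvement.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    residual = y[target_idx] - base(X[target_idx:target_idx + 1]).detach()
    loss = ((improvement(X[target_idx:target_idx + 1]) - residual) ** 2).mean()
    loss.backward()
    opt.step()

# (e) merge: the expanded network adds the improvement model's output to the base output
class Expanded(nn.Module):
    def __init__(self, base, improvement):
        super().__init__()
        self.base, self.improvement = base, improvement
    def forward(self, x):
        return self.base(x) + self.improvement(x)

expanded = Expanded(base, improvement)

# (f) train the expanded network end to end
opt = torch.optim.Adam(expanded.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(expanded(X), y).mean()
    loss.backward()
    opt.step()
```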
Abstract of: US2025138544A1
A system for managing energy usage in a vehicle is provided. The system includes a vehicle control system, an artificial intelligence system, and an energy management module. The vehicle control system is operable to adjust at least one operational parameter of the vehicle. The artificial intelligence (AI) system includes a hybrid neural network configured to: process vehicle operational state and energy consumption information; classify a plurality of operational states of the vehicle; and determine an optimized vehicle operating state based on the classified operational states. The energy management module is coupled to the AI system and the vehicle control system, and: receives operational state and energy consumption information from the vehicle; and modifies the at least one operational parameter to optimize electricity usage of the vehicle.
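As a rough illustration of the control loop described above, the following Python sketch pairs a stand-in state classifier with an energy management module; the three operational states, the torque-limit parameter, and the class names are assumptions made for the example, not details from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
STATES = ["cruise", "accelerate", "regenerate"]

class HybridStateClassifier:
    """Stand-in for the hybrid neural network: scores telemetry against each state."""
    def __init__(self, n_features=4, n_states=3):
        self.w = rng.normal(size=(n_features, n_states))
    def classify(self, telemetry):
        return STATES[int(np.argmax(telemetry @ self.w))]

class EnergyManagementModule:
    """Receives telemetry, asks the AI system for a state, adjusts one parameter."""
    def __init__(self, classifier):
        self.classifier = classifier
        self.torque_limit = 1.0          # the operational parameter being managed
    def step(self, telemetry):
        state = self.classifier.classify(telemetry)
        # Simple policy: trim the torque limit outside of acceleration to save energy.
        self.torque_limit = 1.0 if state == "accelerate" else 0.7
        return state, self.torque_limit

ems = EnergyManagementModule(HybridStateClassifier())
for _ in range(3):
    telemetry = rng.normal(size=4)       # e.g., speed, load, battery level, grade (toy values)
    print(ems.step(telemetry))
```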
Abstract of: US2025131920A1
Provided herein is an integrated circuit including, in some embodiments, a special-purpose host processor, a neuromorphic co-processor, and a communications interface between the host processor and the co-processor configured to transmit information therebetween. The special-purpose host processor is operable as a stand-alone host processor. The neuromorphic co-processor includes an artificial neural network. The co-processor is configured to enhance special-purpose processing of the host processor through the artificial neural network. In such embodiments, the host processor is a keyword identifier processor configured to transmit one or more detected words to the co-processor over the communications interface. The co-processor is configured to transmit recognized words, or other sounds, to the host processor.
Abstract of: WO2024047108A1
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a response to a query input using a selection-inference neural network.
Abstract of: WO2024018065A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for optimizing a target algorithm using a state representation neural network.
Abstract of: US2025117659A1
Systems, devices, and methods related to a deep learning accelerator and memory are described. An integrated circuit may be configured with: a central processing unit; a deep learning accelerator configured to execute instructions with matrix operands; random access memory configured to store first instructions of an artificial neural network executable by the deep learning accelerator and second instructions of an application executable by the central processing unit; one or more connections among the random access memory, the deep learning accelerator, and the central processing unit; and an input/output interface to an external peripheral bus. While the deep learning accelerator is executing the first instructions to convert sensor data according to the artificial neural network to inference results, the central processing unit may execute the application that uses inference results from the artificial neural network.
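The concurrency the abstract describes (the accelerator producing inference results while the CPU application consumes them) can be mimicked in software. The sketch below uses Python threads and a queue in place of the on-chip shared random access memory; this is purely an analogy, not a model of the hardware.

```python
import queue
import threading
import numpy as np

shared_results = queue.Queue()                       # stands in for the shared RAM region
W = np.random.default_rng(0).normal(size=(4, 2))     # "first instructions": a matrix operand

def deep_learning_accelerator():
    # Converts incoming sensor data to inference results with matrix operations.
    for _ in range(5):
        sensor = np.random.default_rng().normal(size=4)
        shared_results.put(sensor @ W)
    shared_results.put(None)                         # signal completion

def application_on_cpu():
    # "Second instructions": the application consumes inference results as they arrive.
    while (result := shared_results.get()) is not None:
        print("application received inference:", np.round(result, 3))

t = threading.Thread(target=deep_learning_accelerator)
t.start()
application_on_cpu()
t.join()
```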
Abstract of: US2025117839A1
Systems, methods, and computer program products for identifying a candidate product in an electronic marketplace based on a visual comparison between candidate product image visual text content and input query image visual text content. Unlike conventional optical character recognition (OCR) based systems, embodiments automatically localize and isolate portions of a candidate product image and an input query image that each contain visual text content, and calculate a visual similarity measure between the respective portions. A trained neural network may be re-trained to more effectively find visual text content by using the localized and isolated visual text content portions as additional ground truths. The visual similarity measure serves as a visual search result score for the candidate product. Any number of images of any number of candidate products may be compared to an input query image to enable text-in-image based product searching without resorting to conventional OCR techniques.
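A minimal Python sketch of the comparison idea follows; the fixed-crop "localizer" and the flatten-and-normalize embedding are placeholders for the trained neural network components and are assumptions of the example.

```python
import numpy as np

def localize_text_region(image):
    # Hypothetical localizer: return the fixed region where visual text is assumed to sit.
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def embed(patch):
    # Placeholder embedding: flatten and L2-normalize the isolated text region.
    v = patch.flatten().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def visual_similarity(query_image, candidate_image):
    # Cosine similarity between the isolated visual-text portions of the two images.
    return float(embed(localize_text_region(query_image)) @
                 embed(localize_text_region(candidate_image)))

rng = np.random.default_rng(0)
query = rng.random((32, 32))
candidates = [rng.random((32, 32)) for _ in range(3)]
scores = [visual_similarity(query, c) for c in candidates]   # visual search result scores
print("visual search result scores:", np.round(scores, 3))
```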
Abstract of: US2025117673A1
Techniques described herein address challenges that arise when using host-executed software to manage vector databases by providing a vector database accelerator and shard management offload logic that is implemented within hardware and by software executed on device processors and programmable data planes of a programmable network interface device. In one embodiment, a programmable network interface device includes infrastructure management circuitry configured to facilitate data access for a neural network inference engine having a distributed data model via dynamic management of a node associated with the neural network inference engine, the node including a database shard of a vector database.
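The underlying data model (one vector-database shard per managed node) can be illustrated with a toy Python lookup; the node names, shard sizes, and merge strategy below are invented, and the network interface device offload itself is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
# Each managed node holds one shard of the vector database.
shards = {node: rng.normal(size=(100, 16)) for node in ("node-a", "node-b", "node-c")}

def nearest_over_shards(query, shards, k=3):
    # Each shard returns its local top-k; the results are then merged across nodes,
    # roughly as a shard manager would do for the inference engine's queries.
    hits = []
    for node, vectors in shards.items():
        scores = vectors @ query
        for i in np.argsort(scores)[-k:]:
            hits.append((float(scores[i]), node, int(i)))
    return sorted(hits, reverse=True)[:k]

print(nearest_over_shards(rng.normal(size=16), shards))
```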
Abstract of: US2025118291A1
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for training an audio-processing neural network that includes at least (1) a first encoder network having a first set of encoder network parameters and (2) a decoder network having a set of decoder network parameters. The system obtains a set of un-labeled audio data segments, and generates, from the set of un-labeled audio data segments, a set of encoder training examples. The system performs training of a second encoder neural network that includes at least the first encoder neural network on the set of generated encoder training examples. The system also obtains one or more labeled training examples, and performs training of the audio-processing neural network on the labeled training examples.
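The two-phase procedure can be sketched in a few lines of PyTorch; the masked-reconstruction pretext task, the projection head, and the classification decoder below are assumptions chosen to make the example runnable, not details from the abstract.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
unlabeled = torch.randn(128, 20)                      # un-labeled audio segments (toy features)
labeled_x, labeled_y = torch.randn(32, 20), torch.randint(0, 4, (32,))

first_encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU())

# Phase 1: train a second encoder network (first encoder + projection) on generated examples.
second_encoder = nn.Sequential(first_encoder, nn.Linear(32, 20))
opt = torch.optim.Adam(second_encoder.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    mask = (torch.rand_like(unlabeled) > 0.2).float()        # generated encoder training examples
    loss = ((second_encoder(unlabeled * mask) - unlabeled) ** 2).mean()  # reconstruct the segment
    loss.backward()
    opt.step()

# Phase 2: train the audio-processing network (first encoder + decoder) on labeled examples.
decoder = nn.Linear(32, 4)
audio_net = nn.Sequential(first_encoder, decoder)
opt = torch.optim.Adam(audio_net.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = ce(audio_net(labeled_x), labeled_y)
    loss.backward()
    opt.step()
```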
Abstract of: US2025117874A1
One embodiment provides an apparatus comprising a memory stack including multiple memory dies and a parallel processor including a plurality of multiprocessors. Each multiprocessor has a single instruction, multiple thread (SIMT) architecture, the parallel processor coupled to the memory stack via one or more memory interfaces. At least one multiprocessor comprises a multiply-accumulate circuit to perform multiply-accumulate operations on matrix data in a stage of a neural network implementation to produce a result matrix comprising a plurality of matrix data elements at a first precision, precision tracking logic to evaluate metrics associated with the matrix data elements and indicate if an optimization is to be performed for representing data at a second stage of the neural network implementation, and a numerical transform unit to dynamically perform a numerical transform operation on the matrix data elements based on the indication to produce transformed matrix data elements at a second precision.
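A software analogy of the precision-tracking idea is shown below in NumPy; treating the metric as dynamic range and the numerical transform as a scale-then-cast to float16 is an assumption of the example, not the circuit's actual behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = 100 * rng.normal(size=(64, 64)), 100 * rng.normal(size=(64, 64))

# Stage 1: multiply-accumulate at the first precision (float32).
result = A.astype(np.float32) @ B.astype(np.float32)

# Precision tracking: evaluate a metric of the matrix data elements (here, dynamic range).
max_abs = float(np.max(np.abs(result)))
needs_transform = max_abs > np.finfo(np.float16).max / 4     # headroom check (illustrative)

# Numerical transform: rescale so the values fit the second precision, then cast.
scale = 1.0
if needs_transform:
    scale = max_abs / (np.finfo(np.float16).max / 4)
transformed = (result / scale).astype(np.float16)

# A second stage would consume `transformed` together with `scale` to undo the rescaling.
print(transformed.dtype, "scale =", round(scale, 3))
```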
Abstract of: US2025117685A1
An iterative machine learning interatomic potential (MLIP) training method. The training method includes training a first multiplicity of first MLIP models in a first iteration of a training loop. The training method further includes training a second multiplicity of second MLIP models in a second iteration of the training loop in parallel with the first training step. The training method also includes combining the first MLIP models and the second MLIP models to create an iteratively trained MLIP configured to predict one or more values of a material. The MLIP may be a Gaussian Process (GP) based MLIP (e.g., FLARE). The MLIP may be a graph neural network (GNN) based MLIP (e.g., NequIP or Allegro).
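A loose Python sketch of the two-iteration, parallel ensemble idea follows; the toy ridge-regression "MLIPs", bootstrap sampling, and ensemble averaging are stand-ins for the actual potentials, and none of FLARE, NequIP, or Allegro is used here.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)   # toy descriptors -> material values

def train_mlip(seed):
    # One toy "MLIP": ridge regression on a bootstrap sample of the training data.
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(X), len(X))
    Xb, yb = X[idx], y[idx]
    return np.linalg.solve(Xb.T @ Xb + 1e-2 * np.eye(5), Xb.T @ yb)

with ThreadPoolExecutor() as pool:
    first_futures = [pool.submit(train_mlip, s) for s in range(4)]      # first iteration
    second_futures = [pool.submit(train_mlip, s) for s in range(4, 8)]  # second iteration, launched in parallel
    first_models = [f.result() for f in first_futures]
    second_models = [f.result() for f in second_futures]

# Combine the first and second models into the iteratively trained MLIP (ensemble mean).
weights = np.mean(first_models + second_models, axis=0)
print("predicted value for a new structure:", float(rng.normal(size=5) @ weights))
```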
Publication No.: EP4533342A1 09/04/2025
Applicant:
IMUBIT ISRAEL LTD [IL]
Abstract of: AU2023280790A1
A predictive control system includes controllable equipment and a controller. The controller is configured to use a neural network model to predict values of controlled variables predicted to result from operating the controllable equipment in accordance with corresponding values of manipulated variables, use the values of the controlled variables predicted by the neural network model to evaluate an objective function that defines a control objective as a function of at least the controlled variables, perform a predictive optimization process to generate optimal values of the manipulated variables for a plurality of time steps in an optimization period using the neural network model and the objective function, and operate the controllable equipment by providing the controllable equipment with control signals based on the optimal values of the manipulated variables generated by performing the predictive optimization process.
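The predictive optimization step maps naturally onto gradient-based optimization through a differentiable model. The PyTorch sketch below assumes a toy one-step model, a quadratic tracking objective, and a ten-step horizon, all of which are invented for illustration rather than taken from the patent.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
HORIZON = 10
setpoint = torch.tensor([1.0])

# Neural network model: predicts the controlled variable from the manipulated variable.
model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
for p in model.parameters():
    p.requires_grad_(False)              # only the manipulated variables are optimized

# Manipulated-variable trajectory over the optimization period (decision variables).
u = torch.zeros(HORIZON, 1, requires_grad=True)
opt = torch.optim.Adam([u], lr=5e-2)

def objective(u):
    # Control objective: track the setpoint while penalizing large manipulated-variable moves.
    predicted = model(u)                 # controlled variables predicted by the neural network model
    return ((predicted - setpoint) ** 2).mean() + 1e-3 * (u ** 2).mean()

# Predictive optimization process: adjust u over the horizon to minimize the objective.
for _ in range(200):
    opt.zero_grad()
    loss = objective(u)
    loss.backward()
    opt.step()

# Operate the equipment with control signals based on the optimal manipulated variables.
control_signals = u.detach()
print("first control move:", float(control_signals[0]))
```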