Ministerio de Industria, Turismo y Comercio
 

Neural Networks

Results: 16
Last updated: 02/05/2025 [07:21:00]
Applications published in the last 30 days
Results 1 to 16

Always-On Keyword Detector

Publication No.: US2025131920A1  24/04/2025
Applicant: SYNTIANT [US]
EP_4361894_PA

Abstract of US2025131920A1

Provided herein is an integrated circuit including, in some embodiments, a special-purpose host processor, a neuromorphic co-processor, and a communications interface between the host processor and the co-processor configured to transmit information therebetween. The special-purpose host processor is operable as a stand-alone host processor. The neuromorphic co-processor includes an artificial neural network. The co-processor is configured to enhance special-purpose processing of the host processor through the artificial neural network. In such embodiments, the host processor is a keyword identifier processor configured to transmit one or more detected words to the co-processor over the communications interface. The co-processor is configured to transmit recognized words, or other sounds, to the host processor.

SELECTION-INFERENCE NEURAL NETWORK SYSTEMS

Publication No.: EP4537264A1  16/04/2025
Applicant: DeepMind Technologies Limited [GB]
CN_119452374_PA

Abstract of WO2024047108A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a response to a query input using a selection-inference neural network.

OPTIMIZING ALGORITHMS FOR TARGET PROCESSORS USING REPRESENTATION NEURAL NETWORKS

Publication No.: EP4537196A1  16/04/2025
Applicant: DeepMind Technologies Limited [GB]
WO_2024018065_PA

Abstract of WO2024018065A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for optimizing a target algorithm using a state representation neural network.

COMPUTE OPTIMIZATIONS FOR LOW PRECISION MACHINE LEARNING OPERATIONS

Publication No.: US2025117874A1  10/04/2025
Applicant: Intel Corporation [US]
CN_119440633_PA

Abstract of US2025117874A1

One embodiment provides an apparatus comprising a memory stack including multiple memory dies and a parallel processor including a plurality of multiprocessors. Each multiprocessor has a single instruction, multiple thread (SIMT) architecture, the parallel processor coupled to the memory stack via one or more memory interfaces. At least one multiprocessor comprises a multiply-accumulate circuit to perform multiply-accumulate operations on matrix data in a stage of a neural network implementation to produce a result matrix comprising a plurality of matrix data elements at a first precision, precision tracking logic to evaluate metrics associated with the matrix data elements and indicate if an optimization is to be performed for representing data at a second stage of the neural network implementation, and a numerical transform unit to dynamically perform a numerical transform operation on the matrix data elements based on the indication to produce transformed matrix data elements at a second precision.
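The abstract above describes an int8 multiply-accumulate stage whose accumulator metrics are tracked so the data can be requantized before the next network stage. As a rough illustration (not Intel's hardware design — the scale choice, the int32 accumulator, and the peak-based metric are all assumptions for the sketch), the precision-tracking-plus-transform idea can be simulated in software like this:

```python
def quantize(values, scale):
    """Quantize floats to the int8 range [-128, 127] with the given scale."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def matmul_int8(a, b, scale_a, scale_b):
    """Multiply-accumulate on int8 matrices; accumulate in wider precision
    (Python ints stand in for an int32 accumulator)."""
    n, k, m = len(a), len(b), len(b[0])
    acc = [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
           for i in range(n)]
    return acc, scale_a * scale_b  # the accumulator carries the combined scale

def track_and_requantize(acc, acc_scale):
    """'Precision tracking': derive a new scale from the observed dynamic
    range, then requantize the accumulator back to int8 for the next stage."""
    peak = max(abs(x) for row in acc for x in row) * acc_scale
    new_scale = peak / 127 if peak else 1.0
    out = [quantize([x * acc_scale for x in row], new_scale) for row in acc]
    return out, new_scale
```

For example, `matmul_int8([[10, 20]], [[3], [4]], 0.1, 0.1)` yields a wide accumulator that `track_and_requantize` maps back onto the full int8 range.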

SYSTEM ON A CHIP WITH DEEP LEARNING ACCELERATOR AND RANDOM ACCESS MEMORY

Publication No.: US2025117659A1  10/04/2025
Applicant: Micron Technology, Inc. [US]
US_2023004804_PA

Abstract of US2025117659A1

Systems, devices, and methods related to a deep learning accelerator and memory are described. An integrated circuit may be configured with: a central processing unit; a deep learning accelerator configured to execute instructions with matrix operands; random access memory configured to store first instructions of an artificial neural network executable by the deep learning accelerator and second instructions of an application executable by the central processing unit; one or more connections among the random access memory, the deep learning accelerator and the central processing unit; and an input/output interface to an external peripheral bus. While the deep learning accelerator is executing the first instructions to convert sensor data according to the artificial neural network to inference results, the central processing unit may execute the application that uses inference results from the artificial neural network.

METHOD, MEDIUM, AND SYSTEM FOR INTELLIGENT ONLINE PERSONAL ASSISTANT WITH IMAGE TEXT LOCALIZATION

Publication No.: US2025117839A1  10/04/2025
Applicant: eBay Inc. [US]
US_2021224877_A1

Abstract of US2025117839A1

Systems, methods, and computer program products for identifying a candidate product in an electronic marketplace based on a visual comparison between candidate product image visual text content and input query image visual text content. Unlike conventional optical character recognition (OCR) based systems, embodiments automatically localize and isolate portions of a candidate product image and an input query image that each contain visual text content, and calculate a visual similarity measure between the respective portions. A trained neural network may be re-trained to more effectively find visual text content by using the localized and isolated visual text content portions as additional ground truths. The visual similarity measure serves as a visual search result score for the candidate product. Any number of images of any number of candidate products may be compared to an input query image to enable text-in-image based product searching without resorting to conventional OCR techniques.
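The pipeline described above — localize a text-bearing region, isolate it, and score candidates by visual similarity of the crops — can be sketched as follows. This is a toy stand-in, not eBay's system: the bounding boxes are assumed given (the patent localizes them with a network), and a plain intensity histogram stands in for the trained network's features.

```python
import math

def crop(image, box):
    """Isolate a localized text region; image is a 2-D list of pixel
    intensities, box is (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = box
    return [row[c0:c1] for row in image[r0:r1]]

def embed(region):
    """Stand-in for the trained network: a normalized 4-bin intensity
    histogram. A real system would use learned features of the crop."""
    flat = [p for row in region for p in row]
    hist = [0.0] * 4
    for p in flat:
        hist[min(3, p // 64)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def visual_similarity(query_region, candidate_region):
    """Cosine similarity between crop embeddings = visual search score."""
    q, c = embed(query_region), embed(candidate_region)
    dot = sum(x * y for x, y in zip(q, c))
    nq = math.sqrt(sum(x * x for x in q))
    nc = math.sqrt(sum(x * x for x in c))
    return dot / (nq * nc) if nq and nc else 0.0
```

Ranking candidate product images by this score against the query crop gives the text-in-image search result ordering, with no OCR step involved.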

HIGH AVAILABILITY AI VIA A PROGRAMMABLE NETWORK INTERFACE DEVICE

Publication No.: US2025117673A1  10/04/2025
Applicant: Intel Corporation [US]

Abstract of US2025117673A1

Techniques described herein address the above challenges that arise when using host executed software to manage vector databases by providing a vector database accelerator and shard management offload logic that is implemented within hardware and by software executed on device processors and programmable data planes of a programmable network interface device. In one embodiment, a programmable network interface device includes infrastructure management circuitry configured to facilitate data access for a neural network inference engine having a distributed data model via dynamic management of a node associated with the neural network inference engine, the node including a database shard of a vector database.

SELF-SUPERVISED LEARNING FOR AUDIO PROCESSING

Publication No.: US2025118291A1  10/04/2025
Applicant: Google LLC [US]
JP_2025505962_PA

Abstract of US2025118291A1

Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for training an audio-processing neural network that includes at least (1) a first encoder network having a first set of encoder network parameters and (2) a decoder network having a set of decoder network parameters. The system obtains a set of un-labeled audio data segments, and generates, from the set of un-labeled audio data segments, a set of encoder training examples. The system performs training of a second encoder neural network that includes at least the first encoder neural network on the set of generated encoder training examples. The system also obtains one or more labeled training examples, and performs training of the audio-processing neural network on the labeled training examples.

ITERATIVE MACHINE LEARNING INTERATOMIC POTENTIAL (MLIP) TRAINING METHODS

Publication No.: US2025117685A1  10/04/2025
Applicant: Robert Bosch GmbH [DE]
US_2025117685_PA

Abstract of US2025117685A1

An iterative machine learning interatomic potential (MLIP) training method. The training method includes training a first multiplicity of first MLIP models in a first iteration of a training loop. The training method further includes training a second multiplicity of second MLIP models in a second iteration of the training loop in parallel with the first training step. The training method also includes combining the first MLIP models and the second MLIP models to create an iteratively trained MLIP configured to predict one or more values of a material. The MLIP may be a Gaussian Process (GP) based MLIP (e.g., FLARE). The MLIP may be a graph neural network (GNN) based MLIP (e.g., NequIP or Allegro).
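The abstract's core move — train multiple MLIP models across training-loop iterations, then combine them into one predictor — amounts to a model committee. A minimal sketch, with a trivial per-atom-mean "model" standing in for a real GP (e.g. FLARE) or GNN (e.g. NequIP/Allegro) potential; the training data shapes and combination-by-averaging are assumptions for illustration:

```python
def train_mlip(structures, energies):
    """Stand-in 'MLIP': fits a per-atom mean energy. A real model would be
    a Gaussian-process or graph-neural-network interatomic potential."""
    per_atom = [e / len(s) for s, e in zip(structures, energies)]
    mean = sum(per_atom) / len(per_atom)
    return lambda structure: mean * len(structure)

def combine(models):
    """Committee of models from successive training-loop iterations:
    predictions are averaged (the spread could also drive active learning)."""
    def predict(structure):
        preds = [m(structure) for m in models]
        return sum(preds) / len(preds)
    return predict
```

Two models trained in parallel iterations on different data then predict jointly: `combine([m1, m2])(structure)` returns the committee mean for that structure.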

CONTROL SYSTEM WITH OPTIMIZATION OF NEURAL NETWORK PREDICTOR

Publication No.: EP4533342A1  09/04/2025
Applicant: IMUBIT ISRAEL LTD [IL]
AU_2023280790_PA

Abstract of AU2023280790A1

A predictive control system includes controllable equipment and a controller. The controller is configured to use a neural network model to predict values of controlled variables predicted to result from operating the controllable equipment in accordance with corresponding values of manipulated variables, use the values of the controlled variables predicted by the neural network model to evaluate an objective function that defines a control objective as a function of at least the controlled variables, perform a predictive optimization process to generate optimal values of the manipulated variables for a plurality of time steps in an optimization period using the neural network model and the objective function, and operate the controllable equipment by providing the controllable equipment with control signals based on the optimal values of the manipulated variables generated by performing the predictive optimization process.
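The control loop above (neural network predicts controlled variables from manipulated variables; an objective function scores the predictions; an optimizer picks manipulated-variable values over a horizon) can be sketched compactly. Everything concrete here is assumed for illustration — a fixed linear response stands in for the neural network model, and grid search stands in for the patent's predictive optimization process:

```python
def predictor(manipulated):
    """Stand-in for the neural network model: predicts the controlled
    variable resulting from a manipulated-variable value."""
    return 2.0 * manipulated + 1.0

def objective(controlled, setpoint=10.0):
    """Control objective: squared tracking error to a setpoint."""
    return (controlled - setpoint) ** 2

def optimize(candidates, horizon=3):
    """Predictive optimization: for each time step in the horizon, pick the
    manipulated-variable value whose predicted outcome minimizes the
    objective. A real system would use a gradient-based optimizer."""
    plan = []
    for _ in range(horizon):
        best = min(candidates, key=lambda u: objective(predictor(u)))
        plan.append(best)
    return plan
```

The resulting plan would then be turned into control signals for the equipment; with the toy response above, `optimize(list(range(6)))` settles on the candidate whose prediction tracks the setpoint most closely at every step.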

PARTICLE-BASED SIMULATORS OF PHYSICAL ENVIRONMENTS

Publication No.: WO2025068599A1  03/04/2025
Applicant: DeepMind Technologies Limited [GB]

Abstract of WO2025068599A1

A system that uses a graph neural network to determine a representation of a physical environment at a new time step, based on representations of the physical environment at a current time step and one or more other time steps, e.g. one or more time steps before and/or after the current time step. The new time step can be before or after the current time step. The representation of the physical environment at the new time step may, for example, be used to generate an image of the physical environment at the new time step. The system can be used for controlling a robot interacting with the physical environment. Some examples of the techniques are specifically adapted for implementation using hardware accelerator units.
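In particle-based simulators of this kind, each particle aggregates messages from nearby particles (the graph edges) and the aggregate drives its state update. A 1-D toy sketch, where a hand-written repulsion stands in for the learned GNN message function and explicit Euler integration stands in for the learned update:

```python
def step(positions, velocities, dt=0.1, radius=1.5):
    """One simulator step: each particle aggregates 'messages' from
    neighbours within `radius` (a stand-in for the learned message
    function), then integrates velocity and position."""
    n = len(positions)
    forces = []
    for i in range(n):
        f = 0.0
        for j in range(n):
            if i != j and abs(positions[i] - positions[j]) < radius:
                # repulsive message: push away from close neighbours
                f += positions[i] - positions[j]
        forces.append(f)
    new_v = [v + f * dt for v, f in zip(velocities, forces)]
    new_x = [x + v * dt for x, v in zip(positions, new_v)]
    return new_x, new_v
```

Rolling `step` forward (or, per the abstract, backward with a model trained for it) produces the environment representation at the new time step.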

APPARATUS, METHOD AND STORAGE MEDIUM FOR IMAGE ENCODING/DECODING TO PERFORM INTRA PREDICTION USING ARTIFICIAL NEURAL NETWORK

Publication No.: US2025113055A1  03/04/2025
Applicant: Electronics and Telecommunications Research Institute [KR]
US_2025113055_PA

Abstract of US2025113055A1

Disclosed herein are a method, an apparatus, and a storage medium for image encoding/decoding. An intra-prediction mode for the target block is derived, and intra-prediction for the target block that uses the derived intra-prediction mode is performed. The intra-prediction mode for the target block is derived using an artificial neural network, and an MPM list for the target block is derived using information about the target block, pieces of information about blocks adjacent to the target block, and the artificial neural network. The artificial neural network outputs one or more available intra-prediction modes. Further, the artificial neural network outputs match probabilities for one or more candidate intra-prediction modes, and each of the match probabilities for the candidate intra-prediction modes indicates a probability that the corresponding candidate intra-prediction mode matches the intra-prediction mode for the target block.
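The last part of the abstract — the network outputs a match probability per candidate intra-prediction mode, from which an MPM (most probable modes) list is derived — reduces to ranking candidates by probability. A minimal sketch (the mode names, logit values, and list size are illustrative assumptions, not the codec's actual mode set):

```python
import math

def softmax(logits):
    """Convert network logits into match probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def derive_mpm_list(candidate_modes, logits, list_size=3):
    """Build an MPM list by ranking candidate intra-prediction modes
    by the network's match probabilities, highest first."""
    probs = softmax(logits)
    ranked = sorted(zip(candidate_modes, probs), key=lambda mp: -mp[1])
    return [mode for mode, _ in ranked[:list_size]]
```

The encoder would then signal the chosen mode relative to this list, spending fewer bits when the network ranks the true mode near the top.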

PLACEMENT OF COMPUTE AND MEMORY FOR ACCELERATED DEEP LEARNING

Publication No.: US2025110808A1  03/04/2025
Applicant: Cerebras Systems Inc. [US]
US_2025110808_PA

Abstract of US2025110808A1

Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.

AN ELECTRONIC DEVICE COMPRISING A DEEP NEURAL NETWORK MODEL AND A METHOD FOR OPERATING THE SAME

Publication No.: KR20250047001A  03/04/2025
Applicant: Samsung Electronics Co., Ltd. (삼성전자주식회사)
WO_2025071378_PA

Abstract of WO2025071378A1

Provided are: an electronic device configured to obtain output data having a high bit resolution for 8-bit quantized input data through a deep neural network operation using a digital signal processor (or an artificial intelligence dedicated processor) for performing an 8-bit operation; and an operation method therefor. The electronic device can: quantize a pixel value of an image obtained through a camera to eight bits; input the quantized 8-bit image to a deep neural network model of a digital signal processor for performing an 8-bit operation; obtain first output data having an 8-bit output value, by performing inference by the deep neural network model; obtain 8-bit second output data having a bin number, within a preset output value range; and obtain an output image by combining the first output data and the second output data.

ELECTRONIC DEVICE COMPRISING DEEP NEURAL NETWORK MODEL AND OPERATION METHOD THEREFOR

Publication No.: WO2025071378A1  03/04/2025
Applicant: Samsung Electronics Co., Ltd. [KR]

Abstract of WO2025071378A1

Provided are: an electronic device configured to obtain output data having a high bit resolution for 8-bit quantized input data through a deep neural network operation using a digital signal processor (or an artificial intelligence dedicated processor) for performing an 8-bit operation; and an operation method therefor. The electronic device can: quantize a pixel value of an image obtained through a camera to eight bits; input the quantized 8-bit image to a deep neural network model of a digital signal processor for performing an 8-bit operation; obtain first output data having an 8-bit output value, by performing inference by the deep neural network model; obtain 8-bit second output data having a bin number, within a preset output value range; and obtain an output image by combining the first output data and the second output data.
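The abstract does not spell out how the two 8-bit outputs are combined into a high-bit-resolution result, but one plausible reading is a coarse byte plus a bin index refining within the coarse step. Under that assumption (the 256-bin split and the 16-bit target are illustrative, not taken from the patent), the combination is:

```python
def split(value, bins=256):
    """Encode a high-resolution value (0..65535 here) as two 8-bit
    outputs: a coarse byte and a bin index within the coarse step."""
    coarse = value // bins
    fine = value % bins
    return coarse, fine

def combine(coarse, fine, bins=256):
    """Reconstruct the high-bit-resolution output from the two 8-bit maps,
    as the electronic device would when forming the output image."""
    return coarse * bins + fine
```

Each pixel of the output image would be reconstructed this way from the model's first (coarse) and second (bin-number) output maps, even though the DSP itself only ever computes in 8 bits.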

Knowledge-Driven Recommendation System for Conversation - Ontology and Taxonomy Binding for Recommendation System

Publication No.: US2025111193A1  03/04/2025
Applicant: Forum Systems, Inc. [US]
US_2025111193_PA

Abstract of US2025111193A1

A knowledge-driven recommendation system comprises a processor and a memory with computer code instructions. The executed code instructions cause the system to receive a user query, extract a topic from the query, and submit the topic and query to a neural network. The instructions may further cause the system to return, from the neural network, a collection of taxonomy and ontology pairs, and use the pairs to select information that expands on the query and topic. The taxonomy and ontology pairs are the closest matched pairs from a knowledge graph. The closest matched pairs are retrieved when the input taxonomy topic semantically matches closest to a taxonomy topic from the custom neural network, the input ontology semantically matches closest to an ontology from the custom neural network, and the taxonomy topic from the custom neural network matches closest to one of the entities in the ontology of the neural network.
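The "closest matched pairs" retrieval described above is, at its core, a nearest-neighbour search over embeddings of (taxonomy topic, ontology) pairs. A minimal sketch — the 2-D vectors and the example pairs are invented stand-ins for the custom neural network's representations of the knowledge graph:

```python
import math

def cosine(a, b):
    """Semantic closeness between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def closest_pair(query_vec, knowledge_graph):
    """Return the (taxonomy topic, ontology) pair whose embedding is
    semantically closest to the query embedding. `knowledge_graph` maps
    (topic, ontology) pairs to embedding vectors."""
    return max(knowledge_graph,
               key=lambda pair: cosine(query_vec, knowledge_graph[pair]))
```

The selected pair would then index into the knowledge graph to pull the expansion content returned with the recommendation.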
