Ministerio de Industria, Turismo y Comercio
 

Alert

29 results
Last updated: 25/09/2025 [07:33:00]
Applications published in the last 30 days
Results 1 to 25 of 29

EFFICIENCY ADJUSTABLE SPEECH RECOGNITION SYSTEM

Publication No.: EP4621769A2 24/09/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4621769_PA

Abstract of: EP4621769A2

A computing system is configured to generate a transformer-transducer-based deep neural network. The transformer-transducer-based deep neural network comprises a transformer encoder network and a transducer predictor network. The transformer encoder network has a plurality of layers, each of which includes a multi-head attention network sublayer and a feed-forward network sublayer. The computing system trains an end-to-end (E2E) automatic speech recognition (ASR) model using the transformer-transducer-based deep neural network. The E2E ASR model has one or more adjustable hyperparameters that are configured to dynamically adjust the efficiency or performance of the E2E ASR model when the E2E ASR model is deployed onto or executed by a device.
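The adjustable-hyperparameter idea can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, the `active_layers` hyperparameter, and the latency numbers are all assumptions.

```python
# Hypothetical sketch: an E2E ASR model whose effective encoder depth can be
# lowered at deployment time to trade accuracy for efficiency on the device.
class AsrModel:
    def __init__(self, num_layers=12):
        self.num_layers = num_layers      # layers built at training time
        self.active_layers = num_layers   # adjustable hyperparameter

    def configure_for_device(self, budget_ms, per_layer_ms=5):
        # Use only as many encoder layers as the latency budget allows.
        self.active_layers = max(1, min(self.num_layers, budget_ms // per_layer_ms))
        return self.active_layers

model = AsrModel()
assert model.configure_for_device(budget_ms=30) == 6    # constrained device
assert model.configure_for_device(budget_ms=500) == 12  # full model
```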

AI MODEL PROTECTION FOR AI PCS

Publication No.: US2025292357A1 18/09/2025
Applicant:
INTEL CORP [US]
Intel Corporation
US_2025292357_PA

Abstract of: US2025292357A1

One embodiment provides a graphics processor comprising a base die including a plurality of chiplet sockets and a plurality of chiplets coupled with the plurality of chiplet sockets. At least one of the plurality of chiplets includes a graphics processing cluster including a plurality of processing resources, the plurality of processing resources including a matrix accelerator having circuitry to perform operations for a neural network in which the model topology and weights of the neural network are encrypted. The matrix accelerator is configured to execute commands of a command buffer, the commands generated based on a decomposition of the model topology of the neural network, and to access encrypted weights in memory of the graphics processor via circuitry configured to decrypt the encrypted weights using a key that is programmed into the hardware of the circuitry.

ARTIFICIAL NEURAL NETWORK COMPUTING SYSTEMS

Publication No.: US2025291405A1 18/09/2025
Applicant:
CIRRUS LOGIC INT SEMICONDUCTOR LTD [GB]
Cirrus Logic International Semiconductor Ltd
US_2025291405_PA

Abstract of: US2025291405A1

The present disclosure relates to an artificial neural network (ANN) computing system comprising: a buffer configured to store data indicative of input data received from an input device; an inference engine operative to process data from the buffer to generate an interest metric for the input data; and a controller. The controller is operative to control a mode of operation of the inference engine according to the interest metric for the input data.
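A minimal sketch of the interest-metric gating described above, assuming a toy amplitude-based metric; the function names and the threshold are illustrative, not from the application.

```python
# Illustrative controller: switch the inference engine between a low-power
# mode and a full-accuracy mode based on an "interest metric" for the input.
def interest_metric(samples):
    # Toy proxy: mean absolute amplitude of the buffered input data.
    return sum(abs(s) for s in samples) / len(samples)

def select_mode(samples, threshold=0.5):
    return "full" if interest_metric(samples) >= threshold else "low_power"

assert select_mode([0.01, -0.02, 0.03]) == "low_power"  # uninteresting input
assert select_mode([0.9, -0.8, 0.7]) == "full"          # interesting input
```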

NEURAL NETWORK TRAINING FOR VIDEO GENERATION

Publication No.: US2025292125A1 18/09/2025
Applicant:
REVEALIT CORP [US]
Revealit Corporation
US_2025292125_PA

Abstract of: US2025292125A1

A computer-implemented video generation training method and system performs unsupervised training of neural networks using training sets that comprise images, which may be sequentially arranged as videos. The unsupervised training includes obscuring subsets of pixels that are within each of the images. During the training the neural networks automatically learn correspondences among subsets of pixels in the images. An instruction is received from a user and representations of pixel patterns are generated by the trained computer-implemented neural networks in response to the instruction. The pixel patterns are included within a video stream that is provided to the user.
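The pixel-obscuring step of the unsupervised training can be sketched as below; the mask ratio, the RNG seed, and the list-of-lists image representation are assumptions made for illustration.

```python
# Hedged sketch of the masking step: obscure a random subset of pixels in each
# training image so the network must learn correspondences among the rest.
import random

def obscure_pixels(image, mask_ratio=0.25, seed=0):
    rng = random.Random(seed)
    flat = [px for row in image for px in row]
    k = int(len(flat) * mask_ratio)
    masked_idx = set(rng.sample(range(len(flat)), k))
    out = [0 if i in masked_idx else px for i, px in enumerate(flat)]
    n = len(image[0])
    return [out[i:i + n] for i in range(0, len(out), n)]

img = [[1, 2], [3, 4]]
masked = obscure_pixels(img, mask_ratio=0.25)
assert sum(px == 0 for row in masked for px in row) == 1  # one pixel obscured
```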

OPTICAL INFORMATION READING DEVICE

Publication No.: US2025292044A1 18/09/2025
Applicant:
KEYENCE CO LTD [JP]
Keyence Corporation
US_2025292044_PA

Abstract of: US2025292044A1

The aim is to suppress an increase in processing time caused by the load of inference processing while improving reading accuracy through machine-learning inference. An optical information reading device includes a processor including: an inference processing part that inputs a code image to a neural network and executes inference processing to generate an ideal image corresponding to the code image; and a decoding processing part that executes first decoding processing of decoding the code image and second decoding processing of decoding the ideal image generated by the inference processing part. The processor executes the inference processing and the first decoding processing in parallel, and executes the second decoding processing after completion of the inference processing.

HIERARCHY OF NEURAL NETWORK SCALING FACTORS

Publication No.: US2025292362A1 18/09/2025
Applicant:
INTEL CORP [US]
Intel Corporation
US_2025292362_PA

Abstract of: US2025292362A1

Embodiments described herein provide techniques to facilitate hierarchical scaling when quantizing neural network data to a reduced-bit representation. The techniques include operations to load a hierarchical scaling map for a tensor associated with a neural network; partition the tensor into a plurality of regions that respectively include one or more subregions based on the hierarchical scaling map; hierarchically scale numerical values of the tensor based on a first scale factor and a second scale factor via matrix accelerator circuitry, the first scale factor based on a statistical measure of a subregion of numerical values within a region of the plurality of regions and the second scale factor based on a statistical measure of the region that includes the subregion; and generate a quantized representation of the tensor via quantization of the hierarchically scaled numerical values.
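A hedged NumPy sketch of the two-level scaling described above, assuming absolute maxima as the statistical measures and fixed region/subregion sizes; none of these choices are specified by the abstract.

```python
# Two-level (hierarchical) scaling before int8 quantization: each value is
# scaled by a per-subregion factor and a per-region factor, both derived
# from absolute maxima, then rounded to an 8-bit integer.
import numpy as np

def hierarchical_quantize(tensor, region=4, subregion=2):
    q = np.empty_like(tensor, dtype=np.int8)
    scales = []
    for r0 in range(0, tensor.size, region):
        reg = tensor[r0:r0 + region]
        reg_scale = np.abs(reg).max() / 127.0                  # region factor
        for s0 in range(0, region, subregion):
            sub = reg[s0:s0 + subregion]
            sub_scale = np.abs(sub).max() / np.abs(reg).max()  # subregion factor
            scale = reg_scale * sub_scale                      # combined hierarchy
            q[r0 + s0:r0 + s0 + subregion] = np.round(sub / scale)
            scales.append(scale)
    return q, scales

x = np.array([0.5, -1.0, 2.0, 4.0, 8.0, -8.0, 1.0, 0.5], dtype=np.float32)
q, scales = hierarchical_quantize(x)
assert q.dtype == np.int8 and len(scales) == 4
assert int(q[3]) == 127  # subregion max uses the full int8 range
```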

SPACE EFFICIENT TRAINING FOR SEQUENCE TRANSDUCTION MACHINE LEARNING

Publication No.: US2025292764A1 18/09/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2025292764_PA

Abstract of: US2025292764A1

Efficient training is provided for models comprising RNN-T (recurrent neural network transducers). The model transducers comprise an encoder, a decoder, and a fused joint network. The fused joint network receives encoding and decoding embeddings from the encoder and decoder. During training, the model stores the probability data for the next blank output and the next token at each time step rather than storing all probabilities for all possible outputs. This can significantly reduce requirements for memory storage, while still preserving the relevant information required to calculate the loss that will be backpropagated through the neural transducer during training to update the parameters of the neural transducer and to generate a trained or modified neural transducer. The computation of embeddings can also be divided into small slices and some of the utterance padding used for the training samples can also be removed to further reduce the memory storage requirements.
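The storage reduction can be illustrated as follows: at each step only the blank and next-token probabilities are kept, not the full vocabulary distribution. Function and variable names are assumptions, not the patented implementation.

```python
# Illustrative sketch: during RNN-T loss computation, retain only the log
# probability of the blank symbol and of the next target token per step,
# instead of the full per-vocabulary distribution.
import math

def reduced_storage(log_probs, targets, blank=0):
    # log_probs: list over steps of per-vocabulary log-probabilities.
    # Returns [(logp_blank, logp_next_token), ...] -- two numbers per step.
    return [(lp[blank], lp[tok]) for lp, tok in zip(log_probs, targets)]

vocab = 5
step = [math.log(0.2)] * vocab  # uniform toy distribution over the vocabulary
stored = reduced_storage([step, step], targets=[3, 1])
assert len(stored) == 2 and all(len(t) == 2 for t in stored)  # 2 values, not 5
```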

AN EVOLUTIONARY SCHEME FOR TUNING STOCHASTIC NEUROMORPHIC OPTIMIZERS

Publication No.: WO2025191315A1 18/09/2025
Applicant:
ERICSSON TELEFON AB L M [SE]
TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
WO_2025191315_PA

Abstract of: WO2025191315A1

A computer-implemented method (200) for automated tuning of a stochastic spiking neural network (SSNN) for solving a combinatorial optimization problem (COP). The method includes (i) defining (205) a set of features of the COP. The method further includes (ii) building (210) the SSNN with an architecture based on the defined COP feature set. The method further includes (iii) selecting (215) a tunable set of parameters of the SSNN based on the architecture. The method further includes (iv) tuning (220) the selected tunable set of parameters using a genetic algorithm-SSNN (GA-SSNN) model. The method further includes (v) implementing (225) the SSNN using the tuned set of parameters. The method further includes (vi) evaluating (230) the performance of the SSNN to determine whether a pre-determined criterion for solving the COP is met. The method further includes (vii) repeating (235) steps (iv) to (vi) until the performance meets the pre-determined criterion. The method further includes (viii) obtaining (240) the solution for solving the COP.
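Steps (iv)-(vii) form a standard genetic-algorithm loop, sketched below with a toy fitness function standing in for building and evaluating the SSNN on the COP instance; the population size, mutation scale, and stopping criterion are illustrative assumptions.

```python
# Hedged sketch of the outer GA loop: evolve a parameter set until a fitness
# criterion is met. evaluate() stands in for running the SSNN on the COP.
import random

def tune(evaluate, n_params=3, pop=8, target=0.95, max_gens=50, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(0, 1) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(max_gens):
        scored = sorted(population, key=evaluate, reverse=True)
        best = scored[0]
        if evaluate(best) >= target:              # step (vi): criterion met
            return best
        parents = scored[: pop // 2]              # selection (elitist)
        population = parents + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(parents)]  # mutation
            for _ in range(pop - len(parents))
        ]
    return best

# Toy fitness: closeness of all parameters to 0.5 (placeholder for SSNN quality).
fitness = lambda p: 1.0 - sum(abs(g - 0.5) for g in p) / len(p)
best = tune(fitness)
assert len(best) == 3
```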

HIERARCHY OF NEURAL NETWORK SCALING FACTORS

Publication No.: EP4617952A1 17/09/2025
Applicant:
INTEL CORP [US]
INTEL Corporation
EP_4617952_PA

Abstract of: EP4617952A1

Embodiments described herein provide techniques to facilitate hierarchical scaling when quantizing neural network data to a reduced-bit representation. The techniques include operations to load a hierarchical scaling map for a tensor associated with a neural network; partition the tensor into a plurality of regions that respectively include one or more subregions based on the hierarchical scaling map; hierarchically scale numerical values of the tensor based on a first scale factor and a second scale factor via matrix accelerator circuitry, the first scale factor based on a statistical measure of a subregion of numerical values within a region of the plurality of regions and the second scale factor based on a statistical measure of the region that includes the subregion; and generate a quantized representation of the tensor via quantization of the hierarchically scaled numerical values.

SPACE EFFICIENT TRAINING FOR SEQUENCE TRANSDUCTION MACHINE LEARNING

Publication No.: EP4618073A2 17/09/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4618073_PA

Abstract of: EP4618073A2

Efficient training is provided for models comprising RNN-T (recurrent neural network transducers). The model transducers comprise an encoder, a decoder, and a fused joint network. The fused joint network receives encoding and decoding embeddings from the encoder and decoder. During training, the model stores the probability data for the next blank output and the next token at each time step rather than storing all probabilities for all possible outputs. This can significantly reduce requirements for memory storage, while still preserving the relevant information required to calculate the loss that will be backpropagated through the neural transducer during training to update the parameters of the neural transducer and to generate a trained or modified neural transducer. The computation of embeddings can also be divided into small slices and some of the utterance padding used for the training samples can also be removed to further reduce the memory storage requirements.

ARTIFICIAL INTELLIGENCE-GUIDED SCREENING OF UNDER-RECOGNIZED CARDIOMYOPATHIES ADAPTED FOR POINT-OF-CARE CARDIAC ULTRASOUND

Publication No.: WO2025189097A1 12/09/2025
Applicant:
YALE UNIV [US]
YALE UNIVERSITY
WO_2025189097_PA

Abstract of: WO2025189097A1

Provided herein are methods of training a model for cardiac phenotyping using cardiac ultrasonography images and videos adaptable to point-of-care acquisition. The method includes providing an echocardiogram dataset; labeling the echocardiogram dataset with at least one condition of interest; splitting the echocardiogram dataset into a derivation dataset and a testing dataset; initializing a deep neural network (DNN); automating the extraction of echocardiographic view quality metrics; generating natural and synthetic augmentations of cardiac images and videos; and implementing a loss function that accounts for variations in view quality to train noise-adjusted computer vision models for phenotyping at the point-of-care. Also provided herein is a method of cardiac phenotyping employing the model.

INTER-FRAME FEATURE MAP COMPRESSION FOR STATEFUL INFERENCE

Publication No.: US2025284541A1 11/09/2025
Applicant:
SNAP INC [US]
Snap Inc

Abstract of: US2025284541A1

Examples described herein relate to stateful inference of a neural network. A plurality of feature map segments each has a first set of values stored in a compressed manner. The first sets of values at least partially represent an extrinsic state memory of the neural network after processing of a previous input frame. Operations are performed with respect to each feature map segment. The operations include decompressing and storing the first set of values. The operations further include updating at least a subset of the decompressed first set of values based on a current input frame to obtain a second set of values. The second set of values is compressed and stored. Memory resources used to store the decompressed first set of values are released. The second sets of values at least partially represent the extrinsic state memory of the neural network after processing of the current input frame.
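The per-segment decompress/update/recompress cycle can be sketched with `zlib` standing in for the (unspecified) compression scheme; the data layout and function names are assumptions.

```python
# Illustrative per-segment state update: decompress a feature-map segment,
# update a subset of values from the current frame, recompress, and release
# the uncompressed copy.
import pickle
import zlib

def step(compressed_segments, frame_updates):
    new_state = []
    for blob, updates in zip(compressed_segments, frame_updates):
        values = pickle.loads(zlib.decompress(blob))   # first set of values
        for idx, v in updates.items():                 # update a subset only
            values[idx] = v                            # second set of values
        new_state.append(zlib.compress(pickle.dumps(values)))
        del values                                     # release memory
    return new_state

seg = zlib.compress(pickle.dumps([0.0] * 16))
state = step([seg, seg], [{0: 1.5}, {}])
assert pickle.loads(zlib.decompress(state[0]))[0] == 1.5
assert pickle.loads(zlib.decompress(state[1])) == [0.0] * 16
```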

Iterative machine learning interatomic potential (MLIP) training methods

Publication No.: GB2639070A 10/09/2025
Applicant:
BOSCH GMBH ROBERT [DE]
Robert Bosch GmbH
GB_2639070_PA

Abstract of: GB2639070A

An iterative machine learning interatomic potential (MLIP) training method which includes training a first multiplicity of first MLIP models in a first iteration of a training loop; training a second multiplicity of second MLIP models in a second iteration of the training loop in parallel with the first training step; then combining the first MLIP models and the second MLIP models to create an iteratively trained MLIP configured to predict one or more values of a material. The values may be total energy, atomic forces, atomic stresses, atomic charges, and/or polarization. The MLIP may be a Gaussian Process (GP) based MLIP (e.g. FLARE). The MLIP may be a graph neural network (GNN) based MLIP (e.g. NequIP or Allegro). A third MLIP model may be used when predicted confidence or predicted uncertainty pass a threshold. The MLIP models may use different sets of hyperparameters. The first and second MLIP models may use different starting atomic structures or different chemical compositions. Iteration can involve selection of the model with the lowest error rate. Combination can be to account for atomic environment overlap or atomic changes in energies. Training may be terminated when a model is not near a Pareto front.

COORDINATION AND INCREASED UTILIZATION OF GRAPHICS PROCESSORS DURING INFERENCE

Publication No.: EP4614429A2 10/09/2025
Applicant:
INTEL CORP [US]
INTEL Corporation
EP_4614429_A2

Abstract of: EP4614429A2

An apparatus of embodiments, as described herein, includes a processing system including a graphics processor, the graphics processor including a plurality of processing resources for inference employment in neural networks, the plurality of processing resources configured to be partitioned into a plurality of physical resource slices; and a scheduler to receive specification of a limitation on usage of the plurality of processing resources by a plurality of application processes and schedule shared resources in the processing system for a plurality of application processes associated with a plurality of clients of the processing system according to the limitation on usage. The processing system has a capability to limit usage of the plurality of processing resources of the graphics processor by the plurality of application processes based on the specification of the limitation on usage. The limitation on usage of the plurality of processing resources of the graphics processor includes to limit execution of threads of each application process of the plurality of application processes to a specified portion of available threads provided by the plurality of processing resources of the graphics processor, the specified portion being less than all available threads provided by the plurality of processing resources of the graphics processor.

METHOD AND APPARATUS FOR GENERATING NEURAL NETWORK MODEL FOR EACH NODE OF NETWORK WITH HIERARCHICAL TREE STRUCTURE BASED ON DISTRIBUTED LEARNING FRAMEWORK

Publication No.: KR20250131852A 04/09/2025
Applicant:
Korea University Industry-Academic Cooperation Foundation / Ulsan National Institute of Science and Technology (UNIST)
KR_20250131852_PA

Abstract of: US2025272570A1

The present invention relates to a distributed optimization technique for learning a neural network model based on a distributed learning framework embedded in each node, for each node constituting a hierarchical tree-structured network.

End-To-End Graph Convolution Network

Publication No.: US2025278627A1 04/09/2025
Applicant:
NAVER CORP [KR]
NAVER CORPORATION
US_2021319314_A1

Abstract of: US2025278627A1

A natural language sentence includes a sequence of tokens. A system for entering information provided in the natural language sentence to a computing device includes a processor and memory coupled to the processor, the memory including instructions executable by the processor implementing: a contextualization layer configured to generate a contextualized representation of the sequence of tokens; a dimension-preserving convolutional neural network configured to generate an output matrix from the contextualized representation; and a graph convolutional neural network configured to: use the matrix to form a set of adjacency matrices; and generate a label for each token in the sequence of tokens based on hidden states for that token in a last layer of the graph convolutional neural network.

PREDICTING DOCUMENT IMPACT USING A MACHINE LEARNING MODEL

Publication No.: US2025278667A1 04/09/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2025278667_PA

Abstract of: US2025278667A1

This document relates to predicting the impact of documents using a trained machine learning model. For instance, the disclosed implementations can train a gradient-boosted decision tree or neural network to predict impact scores of previously-published documents using features such as author features, journal features, document metadata features, and/or text embeddings representing text from the previously-published documents. Once trained, the machine learning model can be employed to predict impact scores of newly-published documents. The impact scores can be employed for operations such as ranking the newly-published documents in response to a received query.
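The deployment-time use of such a model can be sketched as below, with a stand-in linear scorer in place of the trained gradient-boosted tree or neural network; the feature names and weights are assumptions for illustration.

```python
# Hedged sketch: assemble the feature groups named above for each document,
# predict an impact score, and rank query results by predicted impact.
def features(doc):
    # Author / journal / metadata features; text embedding elided to one dim.
    return [doc["author_h_index"], doc["journal_impact"], doc["n_refs"], doc["emb"]]

def predict_impact(doc, weights=(0.4, 0.5, 0.05, 0.05)):
    # Stand-in for the trained model.
    return sum(w * f for w, f in zip(weights, features(doc)))

docs = [
    {"id": "a", "author_h_index": 10, "journal_impact": 2.0, "n_refs": 30, "emb": 0.1},
    {"id": "b", "author_h_index": 40, "journal_impact": 9.0, "n_refs": 50, "emb": 0.7},
]
ranked = sorted(docs, key=predict_impact, reverse=True)
assert [d["id"] for d in ranked] == ["b", "a"]
```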

EXTRACTION AND CLASSIFICATION OF AUDIO EVENTS IN GAMING SYSTEMS

Publication No.: US2025276240A1 04/09/2025
Applicant:
STEELSERIES APS [DK]
STEELSERIES ApS
US_2025177855_PA

Abstract of: US2025276240A1

A system that incorporates the subject disclosure may include, for example, receiving an input audio stream from a gaming system, the input audio stream including gaming audio of a video game played by a game player, the input audio stream including a plurality of classes of sounds, providing the input audio stream to a neural network, extracting, by the neural network, sounds of a selected class of sounds of the plurality of classes of sounds, and providing a plurality of output audio streams including providing a first audio stream including the sounds of the selected class of sounds of the input audio stream and a second audio stream including remaining sounds of the input audio stream. Additional embodiments are disclosed.

METHOD FOR OPTIMIZING THE COMPUTATIONAL RESOURCES OF A DEEP NEURAL NETWORK

Publication No.: WO2025181663A1 04/09/2025
Applicant:
POLITECNICO DI TORINO [IT]
POLITECNICO DI TORINO
WO_2025181663_PA

Abstract of: WO2025181663A1

Described herein is a computer-implemented method (100) for selecting an architecture of a deep neural network by means of an optimization algorithm comprising a backbone-type macro-component, an encoder-type macro-component, and a decoder-type macro-component, said deep neural network being adapted to execute a computer vision task on a target dataset while fulfilling at least one processing requirement representative of a constraint of available resources on which said selected deep neural network executes said computer vision task, the method comprising the steps of: - receiving (110) said at least one processing requirement representative of a constraint of an available resource of the system; - receiving (120) efficiency data representative of macro-components adapted to define architectures of deep neural networks capable of executing a computer vision task, each one of said macro-components being of the backbone type, or of the encoder type, or of the decoder type; - selecting (130), within a search space comprising a plurality of backbone-type, encoder-type or decoder-type macro-components, a deep neural network architecture suitable for executing said computer vision task which has the highest validation score among all the examined candidate deep neural networks, said validation score being computed on the basis of a metric considering both said at least one processing requirement based on said efficiency data and an accuracy value for each one of said examined candidate deep neural networks.

GRAPH NEURAL NETWORK TRAINING METHOD AND SYSTEM, AND ABNORMAL ACCOUNT IDENTIFICATION METHOD

Publication No.: EP4610887A1 03/09/2025
Applicant:
BEIJING VOLCANO ENGINE TECHNOLOGY CO LTD [CN]
Beijing Volcano Engine Technology Co., Ltd
EP_4610887_PA

Abstract of: EP4610887A1

The present disclosure provides a method and system for training a graph neural network and a method of identifying an abnormal account. The method of training a graph neural network includes: obtaining initial graph structure data corresponding to the terminal device, the initial graph structure data respectively obtained by the plurality of distributed training terminals being derived from the same sample graph structure data; and performing the following graph structure data processing stage and graph neural network training stage cyclically, until a target neural network satisfying a training requirement is obtained: determining a processing opportunity for currently performing a graph structure data processing stage based on historical execution data of historically performing a graph structure data processing stage and a graph neural network training stage; performing, based on the processing opportunity, graph structure data processing on the initial graph structure data in the graph structure data processing stage, to generate target graph structure data; the graph structure data processing comprising data sampling processing and feature extraction processing; and training, based on the target graph structure data, the target neural network in the graph neural network training stage.

METHOD AND SYSTEM OF COMPRESSING DEEP LEARNING MODELS

Publication No.: EP4610881A1 03/09/2025
Applicant:
L & T TECH SERVICES LIMITED [IN]
L & T Technology Services Limited
EP_4610881_PA

Abstract of: EP4610881A1

A method (600) and system (100) of compressing a first deep learning (DL) model is disclosed. A processor receives a verified DL model. The verified DL model is converted into a standard DL model based on a framework corresponding to a plurality of provisional compression types. A compression strategy is selected from a plurality of compression strategies using a neural network (NN), based on determining a compression feature vector from a knowledge graph. A concatenated vector is determined based on a model feature vector, a dataset feature vector, and the compression feature vector. The NN is trained based on the concatenated vector. A bias of the NN is trained based on a model score corresponding to the standard DL model. A compression embedding is determined corresponding to the selected compression strategy.

METHOD AND DEVICE FOR CONTROLLING INFERENCE TASK EXECUTION THROUGH SPLIT INFERENCE OF ARTIFICIAL NEURAL NETWORK

Publication No.: EP4610890A1 03/09/2025
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
EP_4610890_PA

Abstract of: EP4610890A1

Provided are a method and device for controlling inference task execution through split inference of an artificial neural network. The method includes determining one policy from among a plurality of task execution policies based on at least one of requirements of the inference task and a correction index, wherein the correction index indicates a failure rate of each task execution policy, determining, based on the policy, one or more devices to execute split inference of the artificial neural network, updating the correction index corresponding to the policy based on a result of whether the split inference executed by the one or more devices has failed, and updating the policy by using execution records of the split inference obtained from the one or more devices, wherein the execution records include information on a cause of failure of the split inference. Also, the method of controlling execution of an inference task through split inference of an artificial neural network of the electronic device may be performed using an artificial intelligence model.
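The policy selection and correction-index update can be sketched as follows; the policy names, the moving-average update, and the smoothing factor are illustrative assumptions, not details of the application.

```python
# Illustrative sketch: pick the task-execution policy with the lowest
# failure rate ("correction index"), then update that index from the
# observed outcome of the split inference.
def choose_policy(correction_index):
    return min(correction_index, key=correction_index.get)

def update_index(correction_index, policy, failed, alpha=0.1):
    # Exponential moving average of observed failures.
    correction_index[policy] = (
        (1 - alpha) * correction_index[policy] + alpha * float(failed)
    )

idx = {"local_only": 0.30, "split_edge": 0.10, "cloud_offload": 0.20}
p = choose_policy(idx)
assert p == "split_edge"
update_index(idx, p, failed=True)
assert round(idx["split_edge"], 2) == 0.19  # 0.9 * 0.10 + 0.1 * 1.0
```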

METHOD AND/OR APPARATUS FOR ARCHITECTURE SEARCH

Publication No.: US2025272576A1 28/08/2025
Applicant:
BOSCH GMBH ROBERT [DE]
Robert Bosch GmbH
US_2025272576_PA

Abstract of: US2025272576A1

A method is described for an architecture search of a one-shot neural network in order to solve a multi-task problem depending on at least one piece of target hardware.

Systems and Methods for Optimized Multi-Agent Routing Between Nodes

Publication No.: US2025269878A1 28/08/2025
Applicant:
AURORA OPERATIONS INC [US]
Aurora Operations, Inc
US_2021248460_A1

Abstract of: US2025269878A1

Systems and methods described herein can provide for: obtaining, from a remote autonomous vehicle computing system, an incoming communication vector descriptive of a local environmental condition of a remote autonomous vehicle; inputting the incoming communication vector into a value iteration graph neural network of an autonomous vehicle; generating, by the value iteration graph neural network, transportation segment navigation instructions identifying a target transportation segment to navigate the autonomous vehicle to; generating a motion plan through an environment of the autonomous vehicle; and controlling the autonomous vehicle by one or more vehicle control systems based on the motion plan.

USING AUTOMATICALLY UNCOVERED FAILURE CASES TO IMPROVE THE PERFORMANCE OF NEURAL NETWORKS

Publication No.: US2025273001A1 28/08/2025

Applicant:

GDM HOLDING LLC [US]
GDM Holding LLC

JP_2025020137_PA

Abstract of: US2025273001A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for adjusting a target neural network using automatically generated test cases before deployment of the target neural network in a deployment environment. One of the methods may include generating a plurality of test inputs by using a test case generation neural network; processing the plurality of test inputs using a target neural network to generate one or more test outputs for each test input; and identifying, from the one or more test outputs generated by the target neural network for each test input, failing test inputs that result in generation of test outputs by the target neural network that fail one or more criteria.
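The generate-test-filter loop can be sketched with toy stand-ins for both networks; the injected failure and the pass/fail criterion are assumptions made purely for illustration.

```python
# Hedged sketch: a generator proposes test inputs, the target model produces
# outputs, and inputs whose outputs fail a criterion are collected for later
# adjustment of the target network.
def generate_test_inputs(n):
    return list(range(n))                # stand-in for the generator network

def target(x):
    return x * 2 if x != 3 else -1       # stand-in target with a failure at x == 3

def failing_inputs(inputs, criterion):
    return [x for x in inputs if not criterion(x, target(x))]

fails = failing_inputs(generate_test_inputs(5), criterion=lambda x, y: y >= 0)
assert fails == [3]  # the uncovered failure case
```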
