Ministerio de Industria, Turismo y Comercio
 

Alert

Results: 19
Last updated: 15/09/2025 [07:28:00]
Applications published in the last 30 days
Results 1 to 19

ARTIFICIAL INTELLIGENCE-GUIDED SCREENING OF UNDER-RECOGNIZED CARDIOMYOPATHIES ADAPTED FOR POINT-OF-CARE CARDIAC ULTRASOUND

Publication No.: WO2025189097A1 12/09/2025
Applicant:
YALE UNIV [US]
YALE UNIVERSITY
WO_2025189097_PA

Abstract of: WO2025189097A1

Provided herein are methods of training a model for cardiac phenotyping using cardiac ultrasonography images and videos adaptable to point-of-care acquisition. The method includes providing an echocardiogram dataset; labeling the echocardiogram dataset with at least one condition of interest; splitting the echocardiogram dataset into a derivation dataset and a testing dataset; initializing a deep neural network (DNN); automating the extraction of echocardiographic view quality metrics; generating natural and synthetic augmentations of cardiac images and videos; and implementing a loss function that accounts for variations in view quality to train noise-adjusted computer vision models for phenotyping at the point-of-care. Also provided herein is a method of cardiac phenotyping employing the model.
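
The noise-adjusted training idea above can be pictured as a loss that down-weights clips from low-quality views. The snippet below is an illustrative sketch only, not the claimed method: it assumes a per-clip view-quality score in [0, 1], and every name in it is hypothetical.

    import numpy as np

    def quality_weighted_bce(y_true, y_pred, view_quality, floor=0.2, eps=1e-7):
        """Binary cross-entropy where each clip's contribution is scaled by its
        echocardiographic view-quality score (hypothetical formulation)."""
        y_pred = np.clip(y_pred, eps, 1.0 - eps)
        bce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
        weights = floor + (1.0 - floor) * view_quality  # low-quality views still count a little
        return float(np.sum(weights * bce) / np.sum(weights))

    # Toy example: three clips with labels, predictions, and automated quality scores
    labels = np.array([1.0, 0.0, 1.0])
    preds = np.array([0.8, 0.3, 0.6])
    quality = np.array([0.9, 0.4, 0.7])
    print(quality_weighted_bce(labels, preds, quality))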

INTER-FRAME FEATURE MAP COMPRESSION FOR STATEFUL INFERENCE

Publication No.: US2025284541A1 11/09/2025
Applicant:
SNAP INC [US]
Snap Inc

Abstract of: US2025284541A1

Examples described herein relate to stateful inference of a neural network. A plurality of feature map segments each has a first set of values stored in a compressed manner. The first sets of values at least partially represent an extrinsic state memory of the neural network after processing of a previous input frame. Operations are performed with respect to each feature map segment. The operations include decompressing and storing the first set of values. The operations further include updating at least a subset of the decompressed first set of values based on a current input frame to obtain a second set of values. The second set of values is compressed and stored. Memory resources used to store the decompressed first set of values are released. The second sets of values at least partially represent the extrinsic state memory of the neural network after processing of the current input frame.
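
The per-segment decompress/update/recompress cycle can be sketched as follows. This is an illustration rather than the claimed implementation, with zlib standing in for whatever codec the feature-map segments actually use and update_fn standing in for the network update.

    import zlib
    import numpy as np

    def step_stateful_inference(compressed_segments, frame, update_fn):
        """Process one input frame against an extrinsic state memory stored as
        compressed feature-map segments (zlib is a stand-in codec)."""
        new_segments = []
        for blob, shape in compressed_segments:
            # 1) decompress the previous values for this segment
            prev = np.frombuffer(zlib.decompress(blob), dtype=np.float32).reshape(shape)
            # 2) update (a subset of) the values based on the current frame
            curr = update_fn(prev, frame)
            # 3) recompress and store; the decompressed buffer goes out of scope (released)
            new_segments.append((zlib.compress(curr.astype(np.float32).tobytes()), curr.shape))
        return new_segments

    # Toy usage: state is two 4x4 segments, the "update" blends in the frame mean
    state = [(zlib.compress(np.zeros((4, 4), np.float32).tobytes()), (4, 4)) for _ in range(2)]
    frame = np.random.rand(4, 4).astype(np.float32)
    state = step_stateful_inference(state, frame, lambda prev, f: 0.9 * prev + 0.1 * f.mean())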

Iterative machine learning interatomic potential (MLIP) training methods

Publication No.: GB2639070A 10/09/2025
Applicant:
BOSCH GMBH ROBERT [DE]
Robert Bosch GmbH
GB_2639070_PA

Abstract of: GB2639070A

An iterative machine learning interatomic potential (MLIP) training method which includes training a first multiplicity of first MLIP models in a first iteration of a training loop; training a second multiplicity of second MLIP models in a second iteration of the training loop in parallel with the first training step; then combining the first MLIP models and the second MLIP models to create an iteratively trained MLIP configured to predict one or more values of a material. The values may be total energy, atomic forces, atomic stresses, atomic charges, and/or polarization. The MLIP may be a Gaussian Process (GP) based MLIP (e.g. FLARE). The MLIP may be a graph neural network (GNN) based MLIP (e.g. NequIP or Allegro). A third MLIP model may be used when predicted confidence or predicted uncertainty passes a threshold. The MLIP models may use different sets of hyperparameters. The first and second MLIP models may use different starting atomic structures or different chemical compositions. Iteration can involve selection of the model with the lowest error rate. Combination can account for atomic environment overlap or changes in atomic energies. Training may be terminated when a model is not near a Pareto front.
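
As a toy illustration of the iterative ensemble idea (not the claimed FLARE/NequIP/Allegro training), the sketch below fits two stand-in "potentials" with different hyperparameters and starting structures, combines their predictions, and defers to a third model when their disagreement, used as a crude uncertainty proxy, passes a threshold.

    import numpy as np

    def train_mlip(structures, energies, degree):
        """Stand-in 'MLIP': a polynomial energy model fit to (structure, energy) data."""
        return np.polyfit(structures, energies, degree)

    def ensemble_predict(models, x, uncertainty_threshold=0.05, fallback=None):
        """Combine per-iteration models; if their disagreement (a crude uncertainty
        proxy) passes the threshold, defer to a third model."""
        preds = np.array([np.polyval(m, x) for m in models])
        if preds.std() > uncertainty_threshold and fallback is not None:
            return np.polyval(fallback, x)
        return preds.mean()

    # Two "iterations" trained in parallel on different starting structures,
    # then combined into one iteratively trained potential
    x1, x2 = np.linspace(0, 1, 20), np.linspace(0.5, 1.5, 20)
    m1 = train_mlip(x1, x1 ** 2, degree=2)
    m2 = train_mlip(x2, x2 ** 2, degree=3)          # different hyperparameters
    m3 = train_mlip(np.concatenate([x1, x2]), np.concatenate([x1, x2]) ** 2, degree=2)
    print(ensemble_predict([m1, m2], 0.7, fallback=m3))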

COORDINATION AND INCREASED UTILIZATION OF GRAPHICS PROCESSORS DURING INFERENCE

Publication No.: EP4614429A2 10/09/2025
Applicant:
INTEL CORP [US]
INTEL Corporation
EP_4614429_A2

Abstract of: EP4614429A2

An apparatus of embodiments, as described herein, includes a processing system including a graphics processor, the graphics processor including a plurality of processing resources for inference employment in neural networks, the plurality of processing resources configured to be partitioned into a plurality of physical resource slices; and a scheduler to receive specification of a limitation on usage of the plurality of processing resources by a plurality of application processes and schedule shared resources in the processing system for a plurality of application processes associated with a plurality of clients of the processing system according to the limitation on usage. The processing system has a capability to limit usage of the plurality of processing resources of the graphics processor by the plurality of application processes based on the specification of the limitation on usage. The limitation on usage of the plurality of processing resources of the graphics processor includes to limit execution of threads of each application process of the plurality of application processes to a specified portion of available threads provided by the plurality of processing resources of the graphics processor, the specified portion being less than all available threads provided by the plurality of processing resources of the graphics processor.
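
A minimal sketch of the usage-limitation idea, assuming a per-process cap expressed as a fraction of the device's available threads; the scheduler interface and the default cap are hypothetical, not taken from the application.

    def schedule_threads(processes, total_threads, usage_limits):
        """Assign each application process at most its specified portion of the
        available threads; the portion is always less than the full device."""
        allocation = {}
        for proc in processes:
            portion = usage_limits.get(proc, 0.25)   # default cap: 25% of the device
            allocation[proc] = min(int(total_threads * portion), total_threads - 1)
        return allocation

    # Toy example: two clients sharing one graphics processor with 1024 threads
    print(schedule_threads(["render_client", "inference_client"], 1024,
                           {"render_client": 0.5, "inference_client": 0.3}))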

EXTRACTION AND CLASSIFICATION OF AUDIO EVENTS IN GAMING SYSTEMS

Publication No.: US2025276240A1 04/09/2025
Applicant:
STEELSERIES APS [DK]
STEELSERIES ApS
US_2025177855_PA

Abstract of: US2025276240A1

A system that incorporates the subject disclosure may include, for example, receiving an input audio stream from a gaming system, the input audio stream including gaming audio of a video game played by a game player, the input audio stream including a plurality of classes of sounds, providing the input audio stream to a neural network, extracting, by the neural network, sounds of a selected class of sounds of the plurality of classes of sounds, and providing a plurality of output audio streams including providing a first audio stream including the sounds of the selected class of sounds of the input audio stream and a second audio stream including remaining sounds of the input audio stream. Additional embodiments are disclosed.
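
The two-stream output can be illustrated with a soft mask: a stand-in for the neural network assigns each sample a membership in the selected class, and the input stream is split into that class and the remainder. This is a sketch, not the disclosed extractor; the class name and dummy network are invented.

    import numpy as np

    def split_audio_by_class(input_stream, class_mask_fn, selected_class):
        """Produce two output streams: the sounds of the selected class and the
        remaining sounds (class_mask_fn is a stand-in for the neural network)."""
        mask = class_mask_fn(input_stream, selected_class)   # values in [0, 1] per sample
        selected_stream = input_stream * mask
        remainder_stream = input_stream * (1.0 - mask)
        return selected_stream, remainder_stream

    # Toy usage: 1 second of audio at 16 kHz, "footsteps" masked by a dummy network
    audio = np.random.randn(16000).astype(np.float32)
    dummy_net = lambda x, cls: (np.abs(x) > 1.0).astype(np.float32)
    footsteps, rest = split_audio_by_class(audio, dummy_net, "footsteps")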

End-To-End Graph Convolution Network

Publication No.: US2025278627A1 04/09/2025
Applicant:
NAVER CORP [KR]
NAVER CORPORATION
US_2021319314_A1

Abstract of: US2025278627A1

A natural language sentence includes a sequence of tokens. A system for entering information provided in the natural language sentence to a computing device includes a processor and memory coupled to the processor, the memory including instructions executable by the processor implementing: a contextualization layer configured to generate a contextualized representation of the sequence of tokens; a dimension-preserving convolutional neural network configured to generate an output matrix from the contextualized representation; and a graph convolutional neural network configured to: use the matrix to form a set of adjacency matrices; and generate a label for each token in the sequence of tokens based on hidden states for that token in a last layer of the graph convolutional neural network.
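
A toy version of the token-labelling pipeline is sketched below; the dimension-preserving CNN is replaced by a simple token-by-token score matrix, and a single graph-convolution layer stands in for the full network, so the shapes and data flow are illustrative only.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def label_tokens(contextualized, w_gcn, n_labels):
        """Toy pipeline: build an adjacency matrix from a token-by-token score
        matrix, run one graph-convolution layer, and label each token from its
        last-layer hidden state."""
        scores = contextualized @ contextualized.T    # stand-in for the dimension-preserving CNN output
        adjacency = softmax(scores, axis=-1)          # one adjacency matrix from the output matrix
        hidden = np.tanh(adjacency @ contextualized @ w_gcn)  # graph convolution
        return hidden[:, :n_labels].argmax(axis=-1)   # a label per token

    tokens = np.random.randn(6, 16)                   # 6 tokens, 16-dim contextual embeddings
    weights = np.random.randn(16, 16)
    print(label_tokens(tokens, weights, n_labels=4))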

METHOD FOR OPTIMIZING THE COMPUTATIONAL RESOURCES OF A DEEP NEURAL NETWORK

Publication No.: WO2025181663A1 04/09/2025
Applicant:
POLITECNICO DI TORINO [IT]
POLITECNICO DI TORINO
WO_2025181663_PA

Abstract of: WO2025181663A1

Described herein is a computer-implemented method (100) for selecting an architecture of a deep neural network by means of an optimization algorithm comprising a backbone-type macro-component, an encoder-type macro-component, and a decoder-type macro-component, said deep neural network being adapted to execute a computer vision task on a target dataset while fulfilling at least one processing requirement representative of a constraint of available resources on which said selected deep neural network executes said computer vision task, the method comprising the steps of: - receiving (110) said at least one processing requirement representative of a constraint of an available resource of the system; - receiving (120) efficiency data representative of macro-components adapted to define architectures of deep neural networks capable of executing a computer vision task, each one of said macro-components being of the backbone type, or of the encoder type, or of the decoder type; - selecting (130), within a search space comprising a plurality of backbone-type, encoder-type or decoder-type macro-components, a deep neural network architecture suitable for executing said computer vision task which has the highest validation score among all the examined candidate deep neural networks, said validation score being computed on the basis of a metric considering both said at least one processing requirement based on said efficiency data and an accuracy value for each one of said examined candidate deep neural networks.
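
The constrained search can be pictured as scoring every backbone/encoder/decoder combination against a resource budget. The sketch below assumes latency as the constrained resource and a hypothetical accuracy oracle, which is a simplification of the described validation-score metric.

    import itertools

    def select_architecture(backbones, encoders, decoders, budget_ms, accuracy_of):
        """Pick the backbone/encoder/decoder combination with the highest
        validation score among candidates that satisfy the latency requirement."""
        best, best_score = None, float("-inf")
        for combo in itertools.product(backbones, encoders, decoders):
            latency = sum(part["latency_ms"] for part in combo)
            if latency > budget_ms:                  # violates the available-resource constraint
                continue
            accuracy = accuracy_of(combo)            # validation accuracy of the candidate
            score = accuracy - 0.001 * latency       # metric mixing accuracy and efficiency data
            if score > best_score:
                best, best_score = combo, score
        return best, best_score

    backbones = [{"name": "resnet-ish", "latency_ms": 12}, {"name": "mobile-ish", "latency_ms": 5}]
    encoders = [{"name": "fpn-ish", "latency_ms": 4}]
    decoders = [{"name": "light-head", "latency_ms": 2}, {"name": "heavy-head", "latency_ms": 9}]
    fake_accuracy = lambda combo: 0.7 + 0.01 * sum(p["latency_ms"] for p in combo)
    print(select_architecture(backbones, encoders, decoders, budget_ms=20, accuracy_of=fake_accuracy))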

METHOD AND DEVICE FOR CONTROLLING INFERENCE TASK EXECUTION THROUGH SPLIT INFERENCE OF ARTIFICIAL NEURAL NETWORK

Publication No.: EP4610890A1 03/09/2025
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
Samsung Electronics Co., Ltd
EP_4610890_PA

Abstract of: EP4610890A1

Provided are a method and device for controlling inference task execution through split inference of an artificial neural network. The method includes determining one policy from among a plurality of task execution policies based on at least one of requirements of the inference task and a correction index, wherein the correction index indicates a failure rate of each task execution policy, determining, based on the policy, one or more devices to execute split inference of the artificial neural network, updating the correction index corresponding to the policy based on a result of whether the split inference executed by the one or more devices has failed, and updating the policy by using execution records of the split inference obtained from the one or more devices, wherein the execution records include information on a cause of failure of the split inference. Also, the method of controlling execution of an inference task through split inference of an artificial neural network of the electronic device may be performed using an artificial intelligence model.
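
A compact sketch of the policy-selection and correction-index bookkeeping, assuming the correction index is an exponentially weighted failure rate and that each policy exposes a single latency requirement; both assumptions are illustrative, not taken from the application.

    import random

    def choose_policy(policies, requirements, correction_index):
        """Pick the task execution policy that meets the requirements and has the
        lowest observed failure rate (the correction index)."""
        feasible = [p for p in policies if p["max_latency_ms"] <= requirements["latency_ms"]]
        return min(feasible, key=lambda p: correction_index.get(p["name"], 0.0))

    def update_correction_index(correction_index, policy_name, failed, alpha=0.1):
        """Exponentially weighted update of a policy's failure rate after a round
        of split inference."""
        prev = correction_index.get(policy_name, 0.0)
        correction_index[policy_name] = (1 - alpha) * prev + alpha * (1.0 if failed else 0.0)

    policies = [{"name": "on_device_only", "max_latency_ms": 40},
                {"name": "split_two_devices", "max_latency_ms": 25}]
    index = {"on_device_only": 0.05, "split_two_devices": 0.20}
    policy = choose_policy(policies, {"latency_ms": 30}, index)
    update_correction_index(index, policy["name"], failed=random.random() < 0.1)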

GRAPH NEURAL NETWORK TRAINING METHOD AND SYSTEM, AND ABNORMAL ACCOUNT IDENTIFICATION METHOD

Publication No.: EP4610887A1 03/09/2025
Applicant:
BEIJING VOLCANO ENGINE TECHNOLOGY CO LTD [CN]
Beijing Volcano Engine Technology Co., Ltd
EP_4610887_PA

Abstract of: EP4610887A1

The present disclosure provides a method and system for training a graph neural network and a method of identifying an abnormal account. The method of training a graph neural network includes: obtaining initial graph structure data corresponding to the terminal device, the initial graph structure data respectively obtained by the plurality of distributed training terminals being derived from the same sample graph structure data; and performing the following graph structure data processing stage and graph neural network training stage cyclically, until a target neural network satisfying a training requirement is obtained: determining a processing opportunity for currently performing a graph structure data processing stage based on historical execution data of historically performing a graph structure data processing stage and a graph neural network training stage; performing, based on the processing opportunity, graph structure data processing on the initial graph structure data in the graph structure data processing stage, to generate target graph structure data; the graph structure data processing comprising data sampling processing and feature extraction processing; and training, based on the target graph structure data, the target neural network in the graph neural network training stage.
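
The "processing opportunity" decision can be illustrated as a scheduling heuristic: estimate, from historical durations, how many training steps ahead the sampling and feature-extraction stage must start so its output is ready when training needs it. The sketch below is one plausible reading, not the disclosed mechanism.

    from statistics import mean

    def processing_opportunity(history, lookahead_steps=1):
        """Decide how early to launch the next graph-structure-data processing
        stage, based on historical durations of the processing and training stages."""
        avg_processing = mean(h["processing_s"] for h in history)
        avg_training = mean(h["training_s"] for h in history)
        # start processing this many training steps ahead of time
        return max(1, round(avg_processing / max(avg_training, 1e-6))) * lookahead_steps

    history = [{"processing_s": 4.2, "training_s": 1.9},
               {"processing_s": 3.8, "training_s": 2.1}]
    print(processing_opportunity(history))   # e.g. launch sampling/feature extraction 2 steps ahead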

METHOD AND SYSTEM OF COMPRESSING DEEP LEARNING MODELS

Publication No.: EP4610881A1 03/09/2025
Applicant:
L & T TECH SERVICES LIMITED [IN]
L & T Technology Services Limited
EP_4610881_PA

Abstract of: EP4610881A1

A method (600) and system (100) of compressing a first deep learning (DL) model is disclosed. A processor receives a verified DL model. The verified DL model is converted into a standard DL model based on a framework corresponding to a plurality of provisional compression types. A compression strategy is selected from a plurality of compression strategies using a neural network (NN) based on determining a compression feature vector based on a knowledge graph. A concatenated vector is determined based on a model feature vector, a dataset feature vector and compression feature vector. The NN is trained based on the concatenated vector. A bias of the NN is trained based on a model score corresponding to the standard NN. A compression embedding is determined corresponding to the selected compression strategy.
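
A rough sketch of the strategy-selection step: concatenate model, dataset, and compression feature vectors and score each provisional strategy, with a simple linear scorer standing in for the trained NN. All vector sizes and strategy names are invented for illustration.

    import numpy as np

    def select_compression_strategy(model_vec, dataset_vec, strategies, scorer_weights, bias):
        """Score each provisional compression strategy from the concatenation of
        model, dataset, and compression feature vectors and return the best one."""
        best, best_score = None, float("-inf")
        for name, compression_vec in strategies.items():
            concatenated = np.concatenate([model_vec, dataset_vec, compression_vec])
            score = float(concatenated @ scorer_weights + bias)   # stand-in for the trained NN
            if score > best_score:
                best, best_score = name, score
        return best, best_score

    rng = np.random.default_rng(0)
    model_vec, dataset_vec = rng.random(8), rng.random(4)
    strategies = {"pruning": rng.random(4), "quantization": rng.random(4), "distillation": rng.random(4)}
    print(select_compression_strategy(model_vec, dataset_vec, strategies, rng.random(16), bias=0.1))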

USING AUTOMATICALLY UNCOVERED FAILURE CASES TO IMPROVE THE PERFORMANCE OF NEURAL NETWORKS

Publication No.: US2025273001A1 28/08/2025
Applicant:
GDM HOLDING LLC [US]
GDM Holding LLC
JP_2025020137_PA

Abstract of: US2025273001A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for adjusting a target neural network using automatically generated test cases before deployment of the target neural network in a deployment environment. One of the methods may include generating a plurality of test inputs by using a test case generation neural network; processing the plurality of test inputs using a target neural network to generate one or more test outputs for each test input; and identifying, from the one or more test outputs generated by the target neural network for each test input, failing test inputs that result in generation of test outputs by the target neural network that fail one or more criteria.
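
The adversarial test loop reduces to: generate inputs, run the target model, keep whatever fails a criterion. The sketch below uses trivially simple stand-ins for the test case generation network, the target network, and the criteria.

    import random

    def find_failing_inputs(generate_test_input, target_model, criteria, n_cases=100):
        """Generate test inputs, run them through the target model, and keep the
        ones whose outputs fail any of the given criteria."""
        failing = []
        for _ in range(n_cases):
            test_input = generate_test_input()        # stand-in for the test case generation network
            output = target_model(test_input)
            if not all(criterion(test_input, output) for criterion in criteria):
                failing.append((test_input, output))
        return failing

    # Toy setup: the "target model" scales its input; the criterion flags huge outputs
    generator = lambda: random.uniform(-10, 10)
    target = lambda x: 3.0 * x
    criteria = [lambda x, y: abs(y) < 20.0]
    print(len(find_failing_inputs(generator, target, criteria)))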

METHOD AND/OR APPARATUS FOR ARCHITECTURE SEARCH

Publication No.: US2025272576A1 28/08/2025
Applicant:
BOSCH GMBH ROBERT [DE]
Robert Bosch GmbH
US_2025272576_PA

Abstract of: US2025272576A1

A method for an architecture search of the architecture of a one-shot neural network in order to solve a multi-task problem, depending on at least one piece of target hardware.

METHOD AND DEVICE FOR CONTROLLING INFERENCE TASK EXECUTION THROUGH SPLIT INFERENCE OF ARTIFICIAL NEURAL NETWORK

Publication No.: US2025272114A1 28/08/2025
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
SAMSUNG ELECTRONICS CO., LTD
US_2025272114_PA

Abstract of: US2025272114A1

A method of controlling execution of an inference task includes determining a task execution policy based on requirements of the inference task or a correction index, wherein the correction index is determined based on failure rates of task execution policies; determining, based on the task execution policy, devices to execute split inference; obtaining an updated correction index corresponding to the task execution policy, based on result information indicating the split inference has failed; and obtaining an updated task execution policy based on execution records of the first split inference obtained from the one or more first devices, wherein the execution records include failure cause information of the first split inference, wherein a task execution policy from among the first plurality of task execution policies includes a priority of device conditions for selecting a device to execute the split inference, or a number of devices used for the split inference.

METHOD AND SYSTEM OF COMPRESSING DEEP LEARNING MODELS

Publication No.: US2025272561A1 28/08/2025
Applicant:
L&T TECH SERVICES LIMITED [IN]
L&T TECHNOLOGY SERVICES LIMITED

Abstract of: US2025272561A1

A method and system of compressing a first deep learning (DL) model is disclosed. A processor receives a verified DL model. The verified DL model is converted into a standard DL model based on a framework corresponding to a plurality of provisional compression types. A compression strategy is selected from a plurality of compression strategies using a neural network (NN) based on determining a compression feature vector based on a knowledge graph. A concatenated vector is determined based on a model feature vector, a dataset feature vector and compression feature vector. The NN is trained based on the concatenated vector. A bias of the NN is trained based on a model score corresponding to the standard NN. A compression embedding is determined corresponding to the selected compression strategy.

PARTIAL-ACTIVATION OF NEURAL NETWORK BASED ON HEAT-MAP OF NEURAL NETWORK ACTIVITY

Publication No.: US2025272563A1 28/08/2025
Applicant:
NANO DIMENSION TECH LTD [IL]
NANO DIMENSION TECHNOLOGIES, LTD
JP_2022013765_A

Abstract of: US2025272563A1

A device, system, and method for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a portion of a total number of iterations proportional to the historical metric of activity independently determined for that individual or group of synapses or filters. Training or prediction of the neural network may be performed based on the plurality of partial activations of the neural network.
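
One way to picture the proportional partial activation is a deterministic schedule in which a filter with a historical activity metric of 0.8 participates in roughly 80% of iterations. The sketch below shows that schedule in isolation, not the disclosed training procedure.

    import numpy as np

    def choose_active_filters(activity, total_iterations, iteration):
        """Activate each filter in a fraction of iterations proportional to its
        historical activity metric (higher metric -> active in more iterations)."""
        activity = np.asarray(activity, dtype=np.float64)
        share = activity / activity.max()             # normalise to [0, 1]
        # deterministic schedule: filter i is active when the iteration's phase
        # falls inside its proportional share of the cycle
        phase = (iteration % total_iterations) / total_iterations
        return np.nonzero(phase < share)[0]

    historical_activity = [0.9, 0.2, 0.6, 0.05]
    for it in range(4):
        print(it, choose_active_filters(historical_activity, total_iterations=4, iteration=it))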

Systems and Methods for Optimized Multi-Agent Routing Between Nodes

Publication No.: US2025269878A1 28/08/2025
Applicant:
AURORA OPERATIONS INC [US]
Aurora Operations, Inc
US_2021248460_A1

Abstract of: US2025269878A1

Systems and methods described herein can provide for: obtaining, from a remote autonomous vehicle computing system, an incoming communication vector descriptive of a local environmental condition of a remote autonomous vehicle; inputting the incoming communication vector into a value iteration graph neural network of an autonomous vehicle; generating, by the value iteration graph neural network, transportation segment navigation instructions identifying a target transportation segment to navigate the autonomous vehicle to; generating a motion plan through an environment of the autonomous vehicle; and controlling the autonomous vehicle by one or more vehicle control systems based on the motion plan.
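
The value-iteration flavour of the routing step can be illustrated with plain dynamic programming over a tiny segment graph, where the incoming communication vector is reduced to per-segment congestion penalties. This is a conceptual sketch, not the claimed value iteration graph neural network.

    def segment_values(segments, edges, goal, congestion, sweeps=50):
        """Crude value iteration over a transportation-segment graph: the value of
        a segment is the cheapest cost-to-go to the goal, with edge costs bumped
        by congestion reported in the incoming communication vector."""
        values = {s: 0.0 if s == goal else float("inf") for s in segments}
        for _ in range(sweeps):
            for src, dst, cost in edges:
                candidate = cost + congestion.get(dst, 0.0) + values[dst]
                if candidate < values[src]:
                    values[src] = candidate
        return values

    segments = ["A", "B", "C", "goal"]
    edges = [("A", "B", 1.0), ("A", "C", 2.0), ("B", "goal", 5.0), ("C", "goal", 1.0)]
    congestion = {"C": 3.0}                    # reported by a remote autonomous vehicle
    values = segment_values(segments, edges, "goal", congestion)
    # navigate toward the neighbouring segment with the lowest cost-to-go
    print(min([("B", values["B"]), ("C", values["C"])], key=lambda kv: kv[1]))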

FLEXIBLE ENTITY RESOLUTION NETWORKS

Publication No.: WO2025175313A1 21/08/2025
Applicant:
RELTIO INC [US]
RELTIO, INC
WO_2025175313_PA

Abstract of: WO2025175313A1

In various embodiments, a computing system is configured to provide a multi-stage cascade of large language models and stage N neural networks that identifies matching data records within a set of data records and then merges the matching data records. More specifically, the computing system can use a combination of domain-agnostic large language models and downstream neural network classifiers to identify matching data records that would otherwise not be possible with other machine learning or rules-based entity resolution systems. In one example, a computing system receives an entity resolution request. The entity resolution request can indicate a first entity and a second entity. For example, a data steward may provide the entity resolution request to help determine whether the entities are the same or different.
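
The cascade can be sketched in three stages: a cheap similarity screen, an LLM stand-in for the hard pairs, and a final threshold that decides whether to merge. Thresholds, field names, and the fake LLM below are all illustrative assumptions.

    from difflib import SequenceMatcher

    def merge(a, b):
        merged = dict(a)
        merged.update({k: v for k, v in b.items() if k not in merged})
        return merged

    def resolve_entities(record_a, record_b, llm_same_entity, classifier_threshold=0.8):
        """Multi-stage cascade (sketch): a string-similarity stage screens the pair,
        a large-language-model stand-in judges the hard cases, and a final score
        decides whether the records are merged."""
        similarity = SequenceMatcher(None, record_a["name"], record_b["name"]).ratio()
        if similarity > 0.95:                        # stage 1: obvious match
            return merge(record_a, record_b)
        if similarity < 0.3:                         # stage 1: obvious non-match
            return None
        llm_score = llm_same_entity(record_a, record_b)   # stage 2: domain-agnostic LLM stand-in
        if llm_score >= classifier_threshold:             # stage N: downstream classifier decision
            return merge(record_a, record_b)
        return None

    fake_llm = lambda a, b: 0.9 if a["email"] == b["email"] else 0.1
    print(resolve_entities({"name": "Jon Smith", "email": "j@x.com"},
                           {"name": "Jonathan Smith", "email": "j@x.com"}, fake_llm))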

KNOWLEDGE GRAPH CREATION AND USE

Publication No.: US2025265477A1 21/08/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
MICROSOFT TECHNOLOGY LICENSING, LLC
US_2025265477_PA

Abstract of: US2025265477A1

This disclosure introduces a novel method and system for making a knowledge graph. A relation classification model is used to classify relationships between entities found in natural-language text. These entities become the nodes and the relationships between them become the edges in the knowledge graph. The entities are specific items that are relevant to the subject matter of the natural-language text. If the natural language documents are biomedical texts, the entities may be things such as chemicals, diseases, and genes. The relation classification model uses a transformer-based deep neural network architecture to understand the meaning of the text that contains the entities. The relation classification model also includes a classification layer that classifies the type of relationship between the entities found in the texts. With the knowledge graph, a user can efficiently receive answers to questions based on the aggregate knowledge found in many different documents.
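
Graph construction itself is simple once the relation classifier exists: entities become nodes and each non-trivial classified relation becomes a labelled edge. The sketch below uses toy entity extraction and a toy classifier in place of the transformer-based model.

    from collections import defaultdict

    def build_knowledge_graph(sentences, extract_entities, classify_relation):
        """Sketch of knowledge-graph construction: entities become nodes and each
        classified relationship between an entity pair becomes a labelled edge."""
        graph = defaultdict(list)                 # node -> [(relation, other node)]
        for sentence in sentences:
            entities = extract_entities(sentence)
            for i, head in enumerate(entities):
                for tail in entities[i + 1:]:
                    relation = classify_relation(sentence, head, tail)   # transformer model stand-in
                    if relation != "no_relation":
                        graph[head].append((relation, tail))
        return graph

    sentences = ["Aspirin reduces the risk of heart disease."]
    toy_entities = lambda s: ["aspirin", "heart disease"]
    toy_classifier = lambda s, h, t: "treats" if "reduces" in s else "no_relation"
    print(dict(build_knowledge_graph(sentences, toy_entities, toy_classifier)))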

GENERATING AUDIO USING AUTO-REGRESSIVE GENERATIVE NEURAL NETWORKS

Publication No.: US2025266035A1 21/08/2025
Applicant:
GOOGLE LLC [US]
Google LLC
US_2025266035_PA

Abstract of: US2025266035A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a prediction of an audio signal. One of the methods includes receiving a request to generate an audio signal conditioned on an input; processing the input using an embedding neural network to map the input to one or more embedding tokens; generating a semantic representation of the audio signal; generating, using one or more generative neural networks and conditioned on at least the semantic representation and the embedding tokens, an acoustic representation of the audio signal; and processing at least the acoustic representation using a decoder neural network to generate the prediction of the audio signal.
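
The staged generation pipeline (embedding tokens, then a semantic representation, then an acoustic representation, then waveform decoding) can be shown as a chain of functions; every stage below is a toy stand-in, so the data flow rather than the models is what the sketch illustrates.

    import numpy as np

    def generate_audio(conditioning_text, embed, semantic_model, acoustic_model, decoder):
        """Sketch of the staged pipeline: embed the conditioning input, generate a
        semantic token sequence, generate acoustic tokens conditioned on both, and
        decode the acoustic tokens into a waveform (all stages are stand-ins)."""
        embedding_tokens = embed(conditioning_text)
        semantic_tokens = semantic_model(embedding_tokens)
        acoustic_tokens = acoustic_model(semantic_tokens, embedding_tokens)
        return decoder(acoustic_tokens)

    # Toy stand-ins so the pipeline runs end to end
    embed = lambda text: np.array([hash(w) % 256 for w in text.split()])
    semantic_model = lambda emb: (emb * 3) % 128
    acoustic_model = lambda sem, emb: np.repeat(sem, 4) % 64
    decoder = lambda ac: np.sin(np.cumsum(ac) * 0.01).astype(np.float32)   # fake waveform
    waveform = generate_audio("a dog barking in the rain", embed, semantic_model, acoustic_model, decoder)
    print(waveform.shape)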
