Ministerio de Industria, Turismo y Comercio
 

Alert

Results: 26
Last updated: 03/09/2025 [07:26:00]
Applications published in the last 30 days
Results 1 to 25 of 26

PARTIAL-ACTIVATION OF NEURAL NETWORK BASED ON HEAT-MAP OF NEURAL NETWORK ACTIVITY

Publication No.: US2025272563A1 28/08/2025
Applicant:
NANO DIMENSION TECH LTD [IL]
NANO DIMENSION TECHNOLOGIES, LTD
JP_2022013765_A

Abstract of: US2025272563A1

A device, system, and method for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual or group of the synapses or filters during one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed. Each partial-activation iteration may activate a subset of the plurality of synapses or filters in the neural network. Each individual or group of synapses or filters may be activated in a portion of a total number of iterations proportional to the historical metric of activity independently determined for that individual or group of synapses or filters. Training or prediction of the neural network may be performed based on the plurality of partial activations of the neural network.
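
A minimal sketch (not the patented implementation) of the sampling idea: each iteration activates a subset of filters with probability proportional to a hypothetical historical activity metric, so more-active filters take part in more of the iterations. All sizes and the metric itself are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters = 64
activity = rng.random(n_filters)            # hypothetical historical activity metric per filter
probs = activity / activity.sum()

def partial_activation_mask(frac_active=0.25):
    """Boolean mask selecting the subset of filters activated in one iteration."""
    k = max(1, int(frac_active * n_filters))
    chosen = rng.choice(n_filters, size=k, replace=False, p=probs)
    mask = np.zeros(n_filters, dtype=bool)
    mask[chosen] = True
    return mask

# Over many iterations, each filter is activated in a share of iterations
# roughly proportional to its historical activity metric.
counts = np.zeros(n_filters)
for _ in range(10_000):
    counts += partial_activation_mask()
print(np.corrcoef(counts, activity)[0, 1])  # close to 1.0
```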

Systems and Methods for Optimized Multi-Agent Routing Between Nodes

Publication No.: US2025269878A1 28/08/2025
Applicant:
AURORA OPERATIONS INC [US]
Aurora Operations, Inc
US_2021248460_A1

Abstract of: US2025269878A1

Systems and methods described herein can provide for: obtaining, from a remote autonomous vehicle computing system, an incoming communication vector descriptive of a local environmental condition of a remote autonomous vehicle; inputting the incoming communication vector into a value iteration graph neural network of an autonomous vehicle; generating, by the value iteration graph neural network, transportation segment navigation instructions identifying a target transportation segment to navigate the autonomous vehicle to; generating a motion plan through an environment of the autonomous vehicle; and controlling the autonomous vehicle by one or more vehicle control systems based on the motion plan.
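
The "value iteration" network named above unrolls the classical Bellman recursion over a segment graph. The toy sketch below runs plain value iteration on an invented transportation-segment graph and picks the next segment greedily; it is only a loose illustration of the routing step, not Aurora's model.

```python
import numpy as np

n = 6                                   # transportation segments (nodes)
INF = np.inf
cost = np.full((n, n), INF)
edges = [(0, 1, 2.0), (1, 2, 2.0), (0, 3, 1.0), (3, 4, 3.0), (4, 2, 1.0), (2, 5, 1.0)]
for u, v, c in edges:
    cost[u, v] = cost[v, u] = c

goal = 5
value = np.full(n, INF)
value[goal] = 0.0
for _ in range(n):                      # Bellman updates until convergence
    for u in range(n):
        if u != goal:
            value[u] = min(cost[u, v] + value[v] for v in range(n) if cost[u, v] < INF)

# Greedy choice of the next segment from node 0 toward the goal:
nxt = min((v for v in range(n) if cost[0, v] < INF), key=lambda v: cost[0, v] + value[v])
print(value, nxt)
```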

USING AUTOMATICALLY UNCOVERED FAILURE CASES TO IMPROVE THE PERFORMANCE OF NEURAL NETWORKS

Publication No.: US2025273001A1 28/08/2025
Applicant:
GDM HOLDING LLC [US]
GDM Holding LLC
JP_2025020137_PA

Abstract of: US2025273001A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for adjusting a target neural network using automatically generated test cases before deployment of the target neural network in a deployment environment. One of the methods may include generating a plurality of test inputs by using a test case generation neural network; processing the plurality of test inputs using a target neural network to generate one or more test outputs for each test input; and identifying, from the one or more test outputs generated by the target neural network for each test input, failing test inputs that result in generation of test outputs by the target neural network that fail one or more criteria.
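
A hedged sketch of the surfaced-failure loop, with toy placeholders standing in for both the test-case generation network and the target network: generate test inputs, run the target, and keep the inputs whose outputs fail a criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_test_inputs(n):            # placeholder for the test-case generation network
    return rng.normal(size=(n, 4))

def target_model(x):                    # placeholder for the target network
    return x.sum(axis=1)

def passes_criterion(y):                # e.g., output must stay inside a safe range
    return np.abs(y) < 3.0

inputs = generate_test_inputs(1000)
outputs = target_model(inputs)
failing_inputs = inputs[~passes_criterion(outputs)]
print(f"{len(failing_inputs)} failing test inputs collected for adjusting the target model")
```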

METHOD AND SYSTEM OF COMPRESSING DEEP LEARNING MODELS

Publication No.: US2025272561A1 28/08/2025
Applicant:
L&T TECH SERVICES LIMITED [IN]
L&T TECHNOLOGY SERVICES LIMITED

Abstract of: US2025272561A1

A method and system of compressing a first deep learning (DL) model is disclosed. A processor receives a verified DL model. The verified DL model is converted into a standard DL model based on a framework corresponding to a plurality of provisional compression types. A compression strategy is selected from a plurality of compression strategies using a neural network (NN) based on determining a compression feature vector based on a knowledge graph. A concatenated vector is determined based on a model feature vector, a dataset feature vector and compression feature vector. The NN is trained based on the concatenated vector. A bias of the NN is trained based on a model score corresponding to the standard NN. A compression embedding is determined corresponding to the selected compression strategy.
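
An illustrative sketch, with invented shapes, strategy names, and untrained weights, of how a feed-forward scorer could rank compression strategies from a concatenation of model, dataset, and compression feature vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
strategies = ["pruning", "quantization", "low_rank", "distillation"]

model_vec = rng.normal(size=8)          # e.g., depth, parameter count, FLOPs statistics
dataset_vec = rng.normal(size=4)        # e.g., size, class count, input shape
compression_vecs = rng.normal(size=(len(strategies), 6))   # one per provisional compression type

W1, b1 = rng.normal(size=(18, 16)), np.zeros(16)            # untrained toy weights
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def score(comp_vec):
    x = np.concatenate([model_vec, dataset_vec, comp_vec])  # concatenated vector
    h = np.maximum(0.0, x @ W1 + b1)                         # ReLU hidden layer
    return float(h @ W2 + b2)

scores = [score(c) for c in compression_vecs]
print("selected strategy:", strategies[int(np.argmax(scores))])
```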

FLEXIBLE ENTITY RESOLUTION NETWORKS

Publication No.: WO2025175313A1 21/08/2025
Applicant:
RELTIO INC [US]
RELTIO, INC
WO_2025175313_PA

Abstract of: WO2025175313A1

In various embodiments, a computing system is configured to provide a multi-stage cascade of large language models and stage N neural networks that identifies matching data records within a set of data records and then merges the matching data records. More specifically, the computing system can use a combination of domain-agnostic large language models and downstream neural network classifiers to identify matching data records that would otherwise not be possible with other machine learning or rules-based entity resolution systems. In one example, a computing system receives an entity resolution request. The entity resolution request can indicate a first entity and a second entity. For example, a data steward may provide the entity resolution request to help determine whether the entities are the same or different.
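
A minimal sketch of a two-stage cascade, with placeholder components: a cheap string-similarity pass stands in for the LLM comparison and a fixed linear rule stands in for the downstream neural classifier. None of this reproduces Reltio's models; it only shows the cascade shape.

```python
from difflib import SequenceMatcher

def stage1_similarity(a: dict, b: dict) -> float:
    """Crude name similarity (stand-in for the LLM-based comparison)."""
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()

def stage2_classifier(features: list[float]) -> bool:
    """Toy downstream classifier: fixed linear rule instead of a trained network."""
    weights = [2.0, 1.0]
    bias = -1.8
    return sum(w * f for w, f in zip(weights, features)) + bias > 0

def resolve(a: dict, b: dict) -> bool:
    sim = stage1_similarity(a, b)
    if sim < 0.3:                      # cascade: the cheap stage filters clear non-matches
        return False
    email_match = float(a.get("email") == b.get("email"))
    return stage2_classifier([sim, email_match])

rec1 = {"name": "Acme Corp.", "email": "info@acme.com"}
rec2 = {"name": "ACME Corporation", "email": "info@acme.com"}
print(resolve(rec1, rec2))             # True: the two records would be merged
```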

GENERATING AUDIO USING AUTO-REGRESSIVE GENERATIVE NEURAL NETWORKS

Publication No.: US2025266035A1 21/08/2025
Applicant:
GOOGLE LLC [US]
Google LLC
US_2025266035_PA

Abstract of: US2025266035A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a prediction of an audio signal. One of the methods includes receiving a request to generate an audio signal conditioned on an input; processing the input using an embedding neural network to map the input to one or more embedding tokens; generating a semantic representation of the audio signal; generating, using one or more generative neural networks and conditioned on at least the semantic representation and the embedding tokens, an acoustic representation of the audio signal; and processing at least the acoustic representation using a decoder neural network to generate the prediction of the audio signal.
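
A schematic of the staged pipeline, with toy stand-ins for every model: conditioning input → embedding tokens → semantic representation → acoustic representation → decoded waveform. Token vocabularies, shapes, and the decoder are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def embed_input(text: str) -> np.ndarray:            # embedding network stand-in
    return rng.integers(0, 1024, size=16)             # embedding tokens

def semantic_model(emb_tokens) -> np.ndarray:          # coarse "what is said/played"
    return rng.integers(0, 512, size=50)               # semantic representation

def acoustic_model(sem_tokens, emb_tokens) -> np.ndarray:   # generation conditioned on both
    return rng.integers(0, 2048, size=(50, 4))          # e.g., codec codebook indices

def decode(acoustic_tokens) -> np.ndarray:             # decoder network stand-in
    t = np.linspace(0, 1, acoustic_tokens.size)
    return np.sin(2 * np.pi * 220 * t)                  # placeholder waveform

emb = embed_input("a warm synth pad")
sem = semantic_model(emb)
acoustic = acoustic_model(sem, emb)
audio = decode(acoustic)
print(audio.shape)
```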

GRAPH NEURAL NETWORK TRAINING USING USER SKILLS

Publication No.: US2025259037A1 14/08/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2025259037_PA

Abstract of: US2025259037A1

Methods, systems, and apparatuses include training a graph neural network. Content items are received for a user of an online system, the content items including skills. An input graph is generated using the content items, the input graph including nodes and edges linking the nodes. The input graph is sampled using a source node and skills to generate a first computational graph. The input graph is sampled using a target node and skills to generate a second computational graph. A source node embedding is generated by encoding the first computational graph. A target node embedding is generated by encoding the second computational graph. A prediction score is calculated by decoding the source node embedding and the target node embedding. Weights of the graph neural network are updated using the prediction score.
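
A toy sketch of the sampled-neighborhood flow with invented sizes: encode one-hop computational graphs for a source and a target node by mean-pooling neighbor features through a weight matrix, then decode a link-prediction score. The backward pass that would update the weights is left as a comment.

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, dim = 8, 5
features = rng.normal(size=(n_nodes, dim))          # node features (users, skills, content items)
adj = {i: [(i + 1) % n_nodes, (i + 2) % n_nodes] for i in range(n_nodes)}
W = rng.normal(size=(dim, dim)) * 0.1                # encoder weights

def encode(node):
    neigh = features[adj[node]].mean(axis=0)         # sampled one-hop computational graph
    return np.tanh(np.concatenate([features[node], neigh]) @ np.vstack([W, W]))

def decode(src_emb, tgt_emb):
    return 1.0 / (1.0 + np.exp(-src_emb @ tgt_emb))  # prediction score

src, tgt = 0, 3
score = decode(encode(src), encode(tgt))
print("prediction score:", round(float(score), 3))
# In training, this score would be compared against a label (e.g., "user has
# this skill") and the encoder weights W updated from the resulting loss.
```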

METHOD FOR GPU MEMORY MANAGEMENT FOR DEEP NEURAL NETWORK AND COMPUTING DEVICE FOR PERFORMING SAME

Publication No.: US2025259058A1 14/08/2025
Applicant:
MOREH CORP [KR]
MOREH CORP
US_2025259058_PA

Abstract of: US2025259058A1

Embodiments disclosed herein relate to a method for GPU memory management that observes the deep learning of a deep neural network performed by a GPU and reduces the amount of GPU memory used, thereby overcoming limitations attributable to the memory size of the GPU and allowing the more effective performance of the deep learning, and a computing device for performing the same. According to an embodiment, there is disclosed a method for GPU memory management for a deep neural network, the method being performed by a computing device including a GPU and a CPU, the method including: generating a schedule for GPU memory management based on the processing of a unit operation, included in the deep neural network, by the GPU; and moving data required for deep learning of the deep neural network between GPU memory and CPU memory based on the schedule.
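
A simplified, purely illustrative simulation of schedule-driven offloading under a made-up memory budget: tensors the next unit operation does not need are moved to "CPU memory" and the operation's inputs are brought back first. It only shows the bookkeeping, not Moreh's actual scheduler.

```python
MEM_BUDGET = 100                                 # arbitrary units of GPU memory

ops = [                                          # unit operations and the tensors they touch
    ("conv1", {"x": 40, "w1": 20}),
    ("conv2", {"a1": 40, "w2": 20}),
    ("fc",    {"a2": 30, "w3": 10}),
]

gpu, cpu = {}, {"x": 40, "w1": 20, "a1": 40, "w2": 20, "a2": 30, "w3": 10}

def ensure_on_gpu(needed):
    # Evict tensors the current operation does not need until its inputs fit.
    for name, size in list(gpu.items()):
        if name not in needed and sum(gpu.values()) + sum(needed.values()) > MEM_BUDGET:
            cpu[name] = gpu.pop(name)            # offload to CPU memory
    for name, size in needed.items():
        if name not in gpu:
            gpu[name] = cpu.pop(name)            # prefetch back to GPU memory

for op_name, tensors in ops:
    ensure_on_gpu(tensors)
    assert sum(gpu.values()) <= MEM_BUDGET
    print(op_name, "resident on GPU:", sorted(gpu))
```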

NEURAL TAXONOMY EXPANDER

Publication No.: US2025259081A1 14/08/2025
Applicant:
PINTEREST INC [US]
Pinterest, Inc
US_2025259081_PA

Abstract of: US2025259081A1

Systems and methods for automatically placing a taxonomy candidate within an existing taxonomy are presented. More particularly, a neural taxonomy expander (a neural network model) is trained according to the existing, curated taxonomic hierarchy. Moreover, for each node in the taxonomic hierarchy, an embedding vector is generated. A taxonomy candidate is received, where the candidate is to be placed within the existing taxonomy. An embedding vector is generated for the candidate and projected by a projection function of the neural taxonomy expander into the taxonomic hyperspace. A set of closest neighbors to the projected embedding vector of the taxonomy candidate is identified and the closest neighbor of the set is assumed as the parent for the taxonomy candidate. The taxonomy candidate is added to the existing taxonomic hierarchy as a child to the identified parent node.
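
A bare-bones sketch of the placement step, assuming precomputed node embeddings and a random stand-in for the learned projection: project the candidate's embedding, take its nearest existing node as the parent, and attach the candidate as a child.

```python
import numpy as np

rng = np.random.default_rng(5)
taxonomy = {"root": None, "furniture": "root", "chairs": "furniture", "lamps": "furniture"}
node_emb = {name: rng.normal(size=16) for name in taxonomy}    # embeddings per existing node

projection = rng.normal(size=(16, 16)) * 0.1         # stand-in for the learned projection

def place(candidate_name, candidate_emb, k=3):
    projected = candidate_emb @ projection           # into the taxonomic hyperspace
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    neighbors = sorted(node_emb, key=lambda n: cosine(projected, node_emb[n]), reverse=True)[:k]
    parent = neighbors[0]                            # closest neighbor becomes the parent
    taxonomy[candidate_name] = parent                # add candidate as a child of that node
    return parent

print(place("office chairs", rng.normal(size=16)))
```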

SYSTEMS AND METHODS FOR SHAPE OPTIMIZATION OF STRUCTURES USING PHYSICS INFORMED NEURAL NETWORKS

Publication No.: US2025259062A1 14/08/2025
Applicant:
MITSUBISHI ELECTRIC RES LABORATORIES INC [US]
Mitsubishi Electric Research Laboratories, Inc
US_2025259062_PA

Abstract of: US2025259062A1

A method for training a shape optimization neural network to produce an optimized point cloud defining desired shapes of materials with given properties is provided. The method comprises collecting a subject point cloud including points identified by their initial coordinates and material properties and jointly training a first neural network to iteratively modify a shape boundary by changing coordinates of a set of points in the subject point cloud to maximize an objective function and a second neural network to solve for physical fields by satisfying partial differential equations imposed by physics of the different materials of the subject point cloud having a shape produced by the changed coordinates output by the first neural network. The method also comprises outputting optimized coordinates of the set of points in the subject point cloud, produced by the trained first neural network.
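
A conceptual sketch of the joint objective with toy networks and a stand-in PDE (Laplace's equation): one network perturbs point coordinates to maximize an invented objective while a second network is penalized for violating the governing equation on the moved points.

```python
import torch

torch.manual_seed(0)
shape_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
physics_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(list(shape_net.parameters()) + list(physics_net.parameters()), lr=1e-3)

points = torch.rand(128, 2)                       # subject point cloud (toy)

for step in range(5):
    opt.zero_grad()
    moved = points + shape_net(points)            # network 1: modified shape boundary
    moved = moved.detach().requires_grad_(True)   # coordinate gradients for the PDE residual
    u = physics_net(moved)                        # network 2: physical field on the shape
    grads = torch.autograd.grad(u.sum(), moved, create_graph=True)[0]
    u_xx = torch.autograd.grad(grads[:, 0].sum(), moved, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grads[:, 1].sum(), moved, create_graph=True)[0][:, 1]
    pde_residual = ((u_xx + u_yy) ** 2).mean()    # physics-informed penalty (Laplace stand-in)
    objective = (points + shape_net(points)).norm(dim=1).mean()   # toy objective to maximize
    loss = -objective + pde_residual
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```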

MACHINE-LEARNING-BASED MEAL DETECTION AND SIZE ESTIMATION USING CONTINUOUS GLUCOSE MONITORING (CGM) AND INSULIN DATA

Publication No.: US2025259727A1 14/08/2025
Applicant:
OREGON HEALTH & SCIENCE UNIV [US]
Oregon Health & Science University
US_2025259727_PA

Abstract of: US2025259727A1

Disclosed is a meal detection and meal size estimation machine learning technology. In some embodiments, the techniques entail applying to a trained multioutput neural network model a set of input features, the set of input features representing glucoregulatory management data, insulin on board, and time of day, the trained multioutput neural network model representing multiple fully connected layers and an output layer formed from first and second branches, the first branch providing a meal detection output and the second branch providing a carbohydrate estimation output; receiving from the meal detection output a meal detection indication; and receiving from the carbohydrate estimation output a meal size estimation.
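
A structural sketch, with invented feature sizes, of a multioutput model with shared fully connected layers and two branches: a meal-detection output and a carbohydrate (meal size) estimation output. It mirrors the described architecture only in outline, not the trained clinical model.

```python
import torch
import torch.nn as nn

class MealNet(nn.Module):
    def __init__(self, n_features=12):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                   nn.Linear(64, 32), nn.ReLU())
        self.detect_head = nn.Linear(32, 1)      # branch 1: was a meal eaten?
        self.carb_head = nn.Linear(32, 1)        # branch 2: estimated carbohydrates (g)

    def forward(self, x):
        h = self.trunk(x)
        return torch.sigmoid(self.detect_head(h)), torch.relu(self.carb_head(h))

# Input features: CGM-derived glucose statistics, insulin on board, time of day.
x = torch.randn(8, 12)
meal_prob, carbs = MealNet()(x)
print(meal_prob.shape, carbs.shape)
```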

SYSTEMS AND METHODS FOR SHAPE OPTIMIZATION OF STRUCTURES USING PHYSICS INFORMED NEURAL NETWORKS

Publication No.: WO2025169641A1 14/08/2025
Applicant:
MITSUBISHI ELECTRIC CORP [JP]
MITSUBISHI ELECTRIC CORPORATION
WO_2025169641_PA

Abstract of: WO2025169641A1

A method for training a shape optimization neural network to produce an optimized point cloud defining desired shapes of materials with given properties is provided. The method comprises collecting a subject point cloud including points identified by their initial coordinates and material properties and jointly training a first neural network to iteratively modify a shape boundary by changing coordinates of a set of points in the subject point cloud to maximize an objective function and a second neural network to solve for physical fields by satisfying partial differential equations imposed by physics of the different materials of the subject point cloud having a shape produced by the changed coordinates output by the first neural network. The method also comprises outputting optimized coordinates of the set of points in the subject point cloud, produced by the trained first neural network.

EXPLOITING INPUT DATA SPARSITY IN NEURAL NETWORK COMPUTE UNITS

Publication No.: US2025258784A1 14/08/2025
Applicant:
GOOGLE LLC [US]
Google LLC
US_2025258784_PA

Abstract of: US2025258784A1

A computer-implemented method includes receiving, by a computing device, input activations and determining, by a controller of the computing device, whether each of the input activations has either a zero value or a non-zero value. The method further includes storing, in a memory bank of the computing device, at least one of the input activations. Storing the at least one input activation includes generating an index comprising one or more memory address locations that have input activation values that are non-zero values. The method still further includes providing, by the controller and from the memory bank, at least one input activation onto a data bus that is accessible by one or more units of a computational array. The activations are provided, at least in part, from a memory address location associated with the index.
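
A small sketch of the bookkeeping described above: build an index of the memory locations holding non-zero input activations and hand only those to the compute step, skipping multiplications by zero.

```python
import numpy as np

activations = np.array([0.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.3, 0.0])
weights = np.arange(1.0, 9.0)

nonzero_index = np.flatnonzero(activations)        # addresses with non-zero activation values
partial_sum = sum(activations[i] * weights[i] for i in nonzero_index)

print(nonzero_index, partial_sum, activations @ weights)   # same result, fewer multiplies
```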

IMPROVED PEAK DETECTION

Publication No.: WO2025162867A1 07/08/2025
Applicant:
SENTEA [BE]
SENTEA
WO_2025162867_PA

Abstract of: WO2025162867A1

The invention provides, amongst other aspects, a method for detecting peaks in an optical spectrum, the method comprising the steps of obtaining the optical spectrum from at least one optical spectrometer, the optical spectrum comprising a wavelength range; and applying a trained neural network, NN, on the optical spectrum to detect the peaks in the optical spectrum, wherein the detecting of peaks relates to output nodes of the NN having been trained w.r.t zones corresponding to subranges of the wavelength range. Further provided is a device carrying out the method, the device preferably comprising a memory including the trained NN. Further provided is a trained NN having been trained w.r.t zones corresponding to subranges of the wavelength range.

Control sequence generation system and methods

Publication No.: US2025252313A1 07/08/2025
Applicant:
PASSIVELOGIC INC [US]
PassiveLogic, Inc
US_2024160936_PA

Abstract of: US2025252313A1

A model receives a target demand curve as an input and outputs an optimized control sequence that allows equipment within a physical space to be run optimally. A thermodynamic model is created that represents equipment within the physical space, with the equipment being laid out as nodes within the model according to the equipment flow in the physical space. The equipment activation functions comprise equations that mimic equipment operation. Values flow between the nodes similarly to how states flow between the actual equipment. The model is run such that a control sequence is used as input into the neural network; the neural network outputs a demand curve which is then checked against the target demand curve. Machine learning methods are then used to determine a new control sequence. The model is run until a goal state is reached.
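
A loose sketch with a made-up surrogate model: candidate control sequences are fed through a function that predicts a demand curve, the curve is compared against the target demand curve, and the best sequence found is kept. Random search stands in here for the machine-learning update described above.

```python
import numpy as np

rng = np.random.default_rng(6)
HOURS = 24
target_demand = 5.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, HOURS))   # target demand curve

def predicted_demand(control_sequence):
    """Surrogate for the thermodynamic model: equipment setpoints -> demand."""
    return 3.0 + 0.8 * control_sequence + 0.2 * np.roll(control_sequence, 1)

best_seq, best_err = None, np.inf
for _ in range(2000):
    candidate = rng.uniform(0.0, 10.0, size=HOURS)          # candidate control sequence
    err = np.mean((predicted_demand(candidate) - target_demand) ** 2)
    if err < best_err:                                       # check against the target curve
        best_seq, best_err = candidate, err

print("best mean-squared error vs. target demand curve:", round(best_err, 3))
```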

COMBINED DENOISING AND UPSCALING NETWORK WITH IMPORTANCE SAMPLING IN A GRAPHICS ENVIRONMENT

Publication No.: US2025252530A1 07/08/2025
Applicant:
INTEL CORP [US]
Intel Corporation
CN_117546200_PA

Abstract of: US2025252530A1

An apparatus to facilitate combined denoising and upscaling network with importance sampling in a graphics environment is disclosed. The apparatus includes set of processing resources including circuitry configured to: receive, at an input of a density map neural network, a sampled signal of a current frame and a reconstructed sample of the current frame; output, from the density map neural network, a prediction of a density map of samples based on the input of the current frame; provide the density map of samples to a sampler; reproject the density map of samples to a next frame; and apply the reprojected density map of samples to the next frame to generate a next sampled signal.

GRAPHICS ARCHITECTURE INCLUDING A NEURAL NETWORK PIPELINE

Publication No.: US2025252650A1 07/08/2025
Applicant:
INTEL CORP [US]
Intel Corporation
US_2023360307_PA

Abstract of: US2025252650A1

One embodiment provides a graphics processor comprising a block of graphics cores and circuitry including a programmable neural network unit, the programmable neural network unit including one or more neural network hardware blocks, wherein a neural network hardware block includes circuitry to perform neural network operations and activation operations for a layer of a neural network, the programmable neural network unit addressable by cores within the block of graphics cores, wherein the programmable neural network unit is to configure one or more neural network hardware blocks with a meta-shader neural network, the meta-shader neural network to generate a texture for one of multiple types of terrain.

GENERALIST COMBINATIONAL OPTIMIZATION AGENT LEARNER

Publication No.: US2025249583A1 07/08/2025
Applicant:
NAVER CORP [KR]
NAVER CORPORATION
EP_4597374_PA

Abstract of: US2025249583A1

A method for robot navigation includes: receiving a backbone neural network, the backbone neural network trained to solve a set of autonomous navigation tasks including a first autonomous navigation task, where the first autonomous navigation task includes optimizing a first path visiting a first number of first locations when considering first path costs; receiving a second autonomous navigation task, where the second autonomous navigation task includes optimizing a second path visiting a second number of second locations when considering second path costs, where the second locations are locations in an environment of an autonomous machine, where the second number is different from the first number; configuring adapter layers for the backbone neural network for forming a neural network pipeline; and feeding the second locations and the second path costs to the neural network pipeline and determining a path for the autonomous machine based on an output.
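
A minimal sketch, with toy dimensions, of adapting a frozen backbone to a second navigation task by wrapping it in small trainable adapter layers and feeding the new task's locations and path costs through the resulting pipeline. The backbone, sizes, and scoring head are all invented.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))   # pretrained stand-in
for p in backbone.parameters():
    p.requires_grad = False                     # the backbone stays fixed

class AdaptedPipeline(nn.Module):
    def __init__(self, n_locations):
        super().__init__()
        self.in_adapter = nn.Linear(n_locations * 3, 32)    # (x, y, cost) per location
        self.out_adapter = nn.Linear(64, n_locations)       # score per candidate next stop

    def forward(self, locations_and_costs):
        h = backbone(torch.relu(self.in_adapter(locations_and_costs)))
        return self.out_adapter(h)

n_locations = 10                                # second task: a different number of locations
pipeline = AdaptedPipeline(n_locations)
x = torch.rand(1, n_locations * 3)              # second locations and second path costs
print(pipeline(x).argmax().item())              # index of the next location to visit
```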

FIRST NODE, SECOND NODE AND METHODS PERFORMED THEREBY FOR HANDLING A KNOWLEDGE GRAPH

Publication No.: WO2025163648A1 07/08/2025
Applicant:
ERICSSON TELEFON AB L M [SE]
SADASIVAN JISHNU [IN]
TELEFONAKTIEBOLAGET LM ERICSSON (PUBL),
SADASIVAN, Jishnu
WO_2025163648_PA

Abstract of: WO2025163648A1

A computer-implemented method performed by a first node (111). The method is for handling a first KG. The first node (111) operates in a communications system (100). The first node (111) obtains (405) data encoding conversational speech. The first node (111) also obtains (406) features extracted from the data, refraining from converting the data into text. The features are short-term features. The first node (111) determines (407), using an artificial neural network (121), a first KG representing words in the conversational speech and the relations between the words. The determining (407) uses, as first input, the obtained features extracted from the data. The first node (111) then initiates (408) outputting an indication of the determined first KG.

ELECTRONIC DEVICE AND CONTROL METHOD THEREOF

Publication No.: WO2025164944A1 07/08/2025
Applicant:
SAMSUNG ELECTRONICS CO LTD [KR]
삼성전자주식회사
WO_2025164944_PA

Abstract of: WO2025164944A1

This electronic device comprises a memory, and a processor connected to the memory, wherein the processor: when a first user voice in a first language is received, obtains a first translated text in a second language corresponding to the first user voice by inputting the first user voice to a first neural network model; obtains a first keyword text in the second language corresponding to the first user voice by inputting the first user voice to a second neural network model; when a second user voice in the first language is received, obtains a second translated text in the second language corresponding to the second user voice by inputting the second user voice and the first keyword text to the first neural network model; and provides the first translated text and the second translated text.

AUTOMATED AND SEMI-AUTOMATED EXTRACTION OF DATA FROM TABLES AND GRAPHS IN SCIENTIFIC LITERATURE

Publication No.: WO2025165252A1 07/08/2025
Applicant:
EVIDENCE PRIME SP Z O O [PL]
EVIDENCE PRIME SP. Z O.O
WO_2025165252_PA

Abstract of: WO2025165252A1

A system and method for automatically extracting data from tables and graphs in scientific literature, particularly in life sciences and healthcare, is presented. The invention employs a hybrid approach combining computer vision, natural language processing (NLP), and a large language model (LLM)-based data extraction module. A neural network enhances accuracy by providing contextual information. The system performs table structure detection, optical character recognition (OCR), Vision Transformers (ViTs) for text recognition, and graph-to- table conversion. An LLM refines the extracted data using advanced prompt engineering. A user interface enables data review, validation, and iterative refinement. The system employs a novel two-stage approach: de-rendering tables and graphs into a machine-readable format, followed by interpretation and user-defined mapping. A knowledge graph enhances extraction by resolving ambiguities and inferring relationships. Designed for scalability and continuous improvement, the invention significantly enhances data extraction efficiency and accuracy, accelerating research and knowledge discovery.

3D PHOTONIC-ELECTRONIC NEUROMORPHIC COMPUTING

Publication No.: WO2025165752A1 07/08/2025
Applicant:
THE REGENTS OF THE UNIV OF CALIFORNIA [US]
THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
WO_2025165752_PA

Abstract of: WO2025165752A1

A combination of photonic neural networks and electronic neural networks integrated in 3D to form 3D Electronic-Photonic Integrated Circuits (3D EPICs) are described toward enabling new brain-derived neuromorphic hardware with energy-efficiency, connectivity, density, and scalability. Described are the construction of the optoelectronic (OE) neurons in the photonic neural network (PNN) including photonic-memristive dendrites, photonic-memristive synapses, photonic axons, and nano-electronic somas. These OE neurons and their hierarchical interconnections are described in constructing 3D EPICs. The use of a scalable PNN neuromorphic computing simulator is described, as well as PNN training.

METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR DETERMINING FEATURE IMPORTANCE USING SHAPLEY VALUES ASSOCIATED WITH A MACHINE LEARNING MODEL

Publication No.: WO2025166095A1 07/08/2025
Applicant:
VISA INT SERVICE ASS [US]
VISA INTERNATIONAL SERVICE ASSOCIATION
WO_2025166095_PA

Abstract of: WO2025166095A1

Methods, systems, and computer program products are provided for determining feature importance using Shapley values associated with a machine learning model. An example method includes training a classification machine learning model, performing a plurality of feature ablation procedures on the classification machine learning model using a plurality of features to provide a distribution of feature ablation outcomes, training an explainer neural network machine learning model based on the distribution of the feature ablation outcomes to provide a trained explainer neural network machine learning model, wherein the explainer neural network machine learning model is configured to provide an output that comprises a prediction of a Shapley value associated with a feature, and determining one or more Shapley values of an input feature using the explainer neural network machine learning model.
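
A rough sketch under toy assumptions: ablate random feature subsets of a stand-in classifier, record the resulting score, and fit a small explainer to the (mask, outcome) pairs so its per-feature coefficients approximate importance. A linear least-squares fit replaces the explainer neural network for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
n_features = 5
true_weights = np.array([3.0, 0.0, 1.0, -2.0, 0.5])

def model_score(mask):
    """Stand-in classifier score when only the features in `mask` are kept."""
    return float(true_weights @ mask)

masks = rng.integers(0, 2, size=(500, n_features)).astype(float)   # feature ablation procedures
outcomes = np.array([model_score(m) for m in masks])                # distribution of outcomes

# The application trains a neural network explainer on this kind of
# (ablation mask, outcome) data; a least-squares fit illustrates the idea.
coef, *_ = np.linalg.lstsq(np.c_[masks, np.ones(len(masks))], outcomes, rcond=None)
print("approximate per-feature importance:", np.round(coef[:n_features], 2))
```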

EXTRACTING RESPONSES FROM LANGUAGE MODEL NEURAL NETWORKS BY SCORING RESPONSE TOKENS

Publication No.: WO2025166268A1 07/08/2025
Applicant:
DEEPMIND TECH LTD [GB]
GDM HOLDING LLC [US]
DEEPMIND TECHNOLOGIES LIMITED,
GDM HOLDING LLC
WO_2025166268_PA

Abstract of: WO2025166268A1

Methods, systems, and apparatus for generating a response to a query input. In one aspect, a method includes receiving a query input including a sequence of input tokens and processing the query input using a language model neural network to generate multiple candidate output sequences. Each candidate output sequence includes a sequence of output tokens from a vocabulary of output tokens. For each output token, the method further includes identifying, as response tokens, a subset of the output tokens in the candidate output sequence and determining, from scores assigned by the language model neural network while generating the response tokens, a confidence score for the candidate output sequence. The method further includes selecting one of the candidate output sequences based on the confidence scores for the response tokens and generating a response to the query input from the selected candidate output sequence.
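
A sketch with made-up tokens and scores: for each candidate output sequence, keep only the response tokens (here, everything after an "Answer:" marker), average the model-assigned log-probabilities of those tokens into a confidence score, and return the highest-confidence candidate.

```python
import math

candidates = [
    {"tokens": ["Let's", "think", "...", "Answer:", "42"],
     "logprobs": [-0.9, -1.1, -1.3, -0.2, -0.4]},
    {"tokens": ["Answer:", "41"],
     "logprobs": [-0.3, -2.6]},
]

def confidence(candidate):
    idx = candidate["tokens"].index("Answer:") + 1          # keep only the response tokens
    response_logprobs = candidate["logprobs"][idx:]
    return math.exp(sum(response_logprobs) / len(response_logprobs))

best = max(candidates, key=confidence)                       # select by confidence score
print(" ".join(best["tokens"]), round(confidence(best), 3))
```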

Systems and methods for optimizing hyperparameters for machine learning models

Publication No.: GB2637695A 06/08/2025

Applicant:

MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing LLC

GB_2637695_PA

Abstract of: GB2637695A

A combined hyperparameter and proxy model tuning method is described. The method involves iterations of a hyperparameter search 102. In each search iteration, candidate hyperparameters are considered. An initial ('seed') hyperparameter is determined by initialization function 110 and used to train (104) one or more first proxy models on a target dataset 101. From the first proxy model(s), one or more first synthetic datasets are sampled using sampling function 108. A first evaluation model is fitted to each first synthetic dataset, for each candidate hyperparameter, by applying fit function 106, enabling each candidate hyperparameter from hyperparameter generator 112 to be scored. Based on the respective scores assigned to the candidate hyperparameters, a candidate hyperparameter is selected and used (103) to train one or more second proxy models on the target dataset. The hyperparameter search may be random, grid-based, or Bayesian. The scores produced by scoring function 114 can be F1 scores. Applications include generative causal models with neural network architectures.
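
A toy walk-through of one search iteration under invented data, with ridge regression standing in for both the proxy and evaluation models: train a proxy on the target dataset, sample a synthetic dataset from it, fit an evaluation model per candidate hyperparameter, score the candidates, and train the next proxy with the selected value.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)   # target dataset

def fit_ridge(X, y, alpha):
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def sample_synthetic(proxy_w, n=200):
    Xs = rng.normal(size=(n, 3))
    return Xs, Xs @ proxy_w + rng.normal(scale=0.3, size=n)

seed_alpha = 1.0                                   # seed hyperparameter from initialization
proxy_w = fit_ridge(X, y, seed_alpha)              # first proxy model on the target dataset
Xs, ys = sample_synthetic(proxy_w)                 # first synthetic dataset

candidates = [0.01, 0.1, 1.0, 10.0, 100.0]          # candidate hyperparameters
def score(alpha):                                  # scoring function (negative error here)
    w = fit_ridge(Xs, ys, alpha)                   # evaluation model fitted on synthetic data
    return -np.mean((y - X @ w) ** 2)              # scored against the target dataset

best_alpha = max(candidates, key=score)
second_proxy_w = fit_ridge(X, y, best_alpha)       # train the second proxy with the selection
print("selected hyperparameter:", best_alpha)
```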
