Ministerio de Industria, Turismo y Comercio
 

Results: 16
Updated on 05/01/2026 [07:07:00]
Applications published in the last 30 days
Results 1 to 16

ADAPTIVE SAMPLE SELECTION FOR DATA ITEM PROCESSING

Publication No.:  EP4672031A1 31/12/2025
Applicant: 
GOOGLE LLC [US]
Google LLC

Abstract of: EP4672031A1

Methods, systems, and apparatuses, including computer programs encoded on computer storage media, for receiving a query relating to a data item that includes multiple data item samples and processing the query and the data item to generate a response to the query. In particular, the described techniques include adaptively selecting a subset of the data item samples using a selection neural network conditioned on features of the data item samples and the query, then processing the subset and the query using a downstream task neural network to generate a response to the query. By adaptively selecting the subset of data item samples according to the query, the described techniques generate responses to queries that are more accurate and require fewer computational resources than would be the case using other techniques.
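
As a rough illustration of the abstract above (not the patented implementation), here is a minimal NumPy sketch in which a linear scorer stands in for the selection neural network and top-k selection plays the role of adaptive subsetting; all names, shapes, and weights are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=(64, 16))     # 64 data item samples, 16 features each
query = rng.normal(size=(16,))          # query embedding

def selection_scores(samples, query, w):
    # Score every sample on its features concatenated with the query.
    feats = np.concatenate([samples, np.tile(query, (len(samples), 1))], axis=1)
    return feats @ w                    # one relevance score per sample

w = rng.normal(size=(32,))              # stand-in selection-network weights
scores = selection_scores(samples, query, w)
subset = samples[np.argsort(scores)[-8:]]   # keep the 8 highest-scoring samples

# The downstream task network sees only the subset plus the query, so the
# expensive stage runs on 8 samples instead of 64.
response = np.tanh(subset.mean(axis=0) @ query)
print(response)
```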

GENERATION AND USAGE METHOD, DEVICE, ELECTRONIC DEVICE, AND PRODUCT OF GRAPH NEURAL NETWORK

Publication No.:  EP4672089A1 31/12/2025
Applicant: 
BOSCH GMBH ROBERT [DE]
Robert Bosch GmbH
CN_121234986_PA

Abstract of: EP4672089A1

Examples of the present disclosure involve a generation method, a usage method, a device, an electronic device, and a computer program product of a graph neural network. The generation method comprises generating at least one categorical embedding based on target domain knowledge, determining node embeddings of the training graph using the at least one categorical embedding and the original node features of the training graph, and generating a graph neural network using the determined node embeddings of the training graph. The method of generating a graph neural network consistent with examples of the present disclosure is capable of infusing domain knowledge into a graph neural network such that the graph neural network understands the domain knowledge. In this way, the graph neural network can understand the true semantic and contextual information of the target domain, thus granting it domain generality.
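
A minimal sketch of the idea, assuming each node carries a category label drawn from the target domain knowledge; the embedding table, mixing rule, and toy adjacency are illustrative assumptions, not the disclosed method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, feat_dim, embed_dim = 5, 8, 4
node_features = rng.normal(size=(n_nodes, feat_dim))   # original node features
node_category = np.array([0, 1, 1, 2, 0])              # domain-knowledge labels

category_embedding = rng.normal(size=(3, embed_dim))   # one row per category

# Node embeddings = original features joined with the categorical embedding.
node_embeddings = np.concatenate(
    [node_features, category_embedding[node_category]], axis=1)

# One message-passing step over a toy training-graph adjacency.
adj = np.eye(n_nodes) + np.roll(np.eye(n_nodes), 1, axis=1)
hidden = np.maximum(adj @ node_embeddings, 0.0)
print(hidden.shape)    # (5, 12): node states enriched with domain knowledge
```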

TRAINING A PURSUER NEURAL NETWORK AGENT

Publication No.:  EP4672084A1 31/12/2025
Applicant: 
BAE SYSTEMS PLC [GB]
BAE SYSTEMS plc

Abstract of: EP4672084A1

The present invention relates to a computer-implemented method of training a pursuer neural network agent. The method comprises: providing an evader neural network agent trained to iteratively update a plurality of dynamic parameters of an evader, a pursuer neural network agent initialised to iteratively update a plurality of approximators for a pursuer, and an ordinary differential equation solver configured to update a plurality of dynamic parameters of the pursuer based on the updated approximators; sampling initial dynamic parameters from a set of initial dynamic parameters of the evader; running a simulation in which the pursuer neural network agent pursues the evader neural network agent based on the sampled initial dynamic parameters; computing a loss based on the dynamic parameters of the pursuer from the ordinary differential equation solver and the dynamic parameters of the evader from the evader neural network agent; and optimising weights of the pursuer neural network agent based on the computed loss.
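
A minimal sketch of the training-loop structure described above, assuming one-dimensional dynamics, an Euler step standing in for the ordinary differential equation solver, a linear pursuer policy, and finite-difference gradients in place of backpropagation; all of these are simplifications:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=2)              # pursuer agent weights

def evader_step(x):                     # frozen, pre-trained evader agent
    return x + 0.1 * np.sin(x + 1.0)

def rollout(theta, evader0):
    evader, pursuer = evader0, 0.0
    for _ in range(20):                 # run the pursuit simulation
        evader = evader_step(evader)
        u = theta[0] * pursuer + theta[1]   # approximator from the pursuer net
        pursuer += 0.05 * u             # Euler update: the ODE-solver stand-in
    return (pursuer - evader) ** 2      # loss: miss distance at the horizon

for _ in range(200):
    evader0 = rng.uniform(-1.0, 1.0)    # sample initial dynamic parameters
    grad = np.zeros_like(theta)
    for i in range(2):                  # finite-difference gradient estimate
        e = np.zeros(2)
        e[i] = 1e-4
        grad[i] = (rollout(theta + e, evader0)
                   - rollout(theta - e, evader0)) / 2e-4
    theta -= 0.05 * grad                # optimise the pursuer weights
print(theta, rollout(theta, 0.5))
```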

PROOF VERIFICATION DEVICE AND PROOF VERIFICATION METHOD

Publication No.:  WO2025262917A1 26/12/2025
Applicant: 
NTT INC [JP]
ＮＴＴ株式会社

Abstract of: WO2025262917A1

This proof verification device includes: a forward propagation unit that guarantees, by zero-knowledge proof, the correctness of computation in forward propagation processing of a batch normalization system of a neural network or a deep neural network; and an inverse propagation unit that guarantees, by zero-knowledge proof, the correctness of computation in the inverse propagation processing of the neural network or deep neural network.
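
The zero-knowledge machinery is beyond a short example, so the sketch below shows only the batch-normalization forward and backward (inverse) propagation computations whose correctness such a device would attest, in plain NumPy with no cryptography:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then scale and shift.
    mu, var = x.mean(axis=0), x.var(axis=0)
    xhat = (x - mu) / np.sqrt(var + eps)
    return gamma * xhat + beta, (xhat, var, eps)

def batchnorm_backward(dout, gamma, cache):
    # Standard batch-norm gradients w.r.t. input, scale, and shift.
    xhat, var, eps = cache
    dgamma = (dout * xhat).sum(axis=0)
    dbeta = dout.sum(axis=0)
    dxhat = dout * gamma
    dx = (dxhat - dxhat.mean(axis=0)
          - xhat * (dxhat * xhat).mean(axis=0)) / np.sqrt(var + eps)
    return dx, dgamma, dbeta

x = np.random.default_rng(0).normal(size=(8, 4))
out, cache = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
dx, dgamma, dbeta = batchnorm_backward(np.ones_like(out), np.ones(4), cache)
```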

TRAINING ENCODER MODEL AND/OR USING TRAINED ENCODER MODEL TO DETERMINE RESPONSIVE ACTION(S) FOR NATURAL LANGUAGE INPUT

Publication No.:  US2025384350A1 18/12/2025
Applicant: 
GOOGLE LLC [US]
GOOGLE LLC
EP_4400983_A1

Abstract of: US2025384350A1

Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
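
A minimal sketch of how a trained encoder can be reused for semantic textual similarity and action selection; the byte-histogram featurizer, random weights, and canned action table are illustrative stand-ins, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(256, 32))      # stand-in for trained encoder weights

def encode(text):
    # Toy featurizer: byte histogram -> linear encoder -> unit-norm vector.
    h = np.bincount(np.frombuffer(text.encode(), np.uint8), minlength=256)
    v = h @ W_enc
    return v / np.linalg.norm(v)

def semantic_similarity(a, b):
    return float(encode(a) @ encode(b))     # cosine similarity of encodings

# At serving time the similarity score selects a responsive action, e.g. the
# action whose trigger phrase is most similar to the natural language input.
actions = {"set an alarm": "ALARM", "play some music": "MUSIC"}
query = "wake me up tomorrow"
best = max(actions, key=lambda t: semantic_similarity(query, t))
print(actions[best])
```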

SYSTEM AND METHOD FOR NEURAL NETWORK ORCHESTRATION

Publication No.:  US2025384661A1 18/12/2025
Applicant: 
VERITONE INC [US]
Veritone, Inc
US_2024312184_PA

Abstract of: US2025384661A1

Methods and systems for training one or more neural networks for transcription and for transcribing a media file using the trained one or more neural networks are provided. One of the methods includes: segmenting the media file into a plurality of segments; inputting each segment, one segment at a time, of the plurality of segments into a first neural network trained to perform speech recognition; extracting outputs, one segment at a time, from one or more layers of the first neural network; and training a second neural network to generate a predicted-WER (word error rate) of a plurality of transcription engines for each segment based at least on outputs from the one or more layers of the first neural network.
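
A minimal sketch of the orchestration idea: features tapped from an intermediate layer of a recognizer feed a second model that predicts a per-engine WER for each segment, and each segment is routed to the engine with the lowest prediction (both networks are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments, feat_dim, n_engines = 10, 64, 3
W1 = rng.normal(size=(feat_dim, 32))    # layer of the first (speech) NN we tap
W2 = rng.normal(size=(32, n_engines))   # second NN: per-engine WER predictor

routes = []
for seg in rng.normal(size=(n_segments, feat_dim)):  # one segment at a time
    hidden = np.maximum(seg @ W1, 0.0)  # extracted intermediate-layer output
    predicted_wer = 1 / (1 + np.exp(-(hidden @ W2)))  # predicted WER in (0, 1)
    routes.append(int(np.argmin(predicted_wer)))      # engine with lowest WER
print(routes)
```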

METHOD AND DEVICE FOR TRAINING A NEURAL NETWORK BASED DECODER

Publication No.:  US2025384277A1 18/12/2025
Applicant: 
ORANGE [FR]
ORANGE
CN_119384671_PA

Abstract of: US2025384277A1

A device and a method for training a neural network based decoder. The method includes, during training, quantizing, using a training quantizer, parameters representative of the coefficients of the neural network based decoder. A method and device are also provided for encoding at least parameters representative of the coefficients of a neural network based decoder. Also provided are a method for generating an encoded bitstream including an encoded neural network based decoder, a neural network based encoder and decoder, and a signal encoded using the neural network based encoder.
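
A minimal sketch of training-time quantization, assuming a uniform training quantizer and a straight-through-style update on float coefficients; the toy linear decoder is an illustrative assumption:

```python
import numpy as np

def quantize(w, step=0.05):
    return np.round(w / step) * step    # uniform training quantizer

rng = np.random.default_rng(0)
w = rng.normal(size=4)                  # decoder coefficients (float master copy)
x = rng.normal(size=(128, 4))
y = x @ np.array([0.5, -0.2, 0.1, 0.3])    # toy target decoder behaviour

for _ in range(500):
    wq = quantize(w)                    # quantize during the forward pass
    err = x @ wq - y
    grad = x.T @ err / len(x)           # straight-through: gradient w.r.t. wq
    w -= 0.1 * grad                     # ...applied to the float coefficients

print(quantize(w))                      # quantized coefficients, ready to encode
```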

AUTOMATED CREATION OF DIGITAL TWINS USING GRAPH-BASED INDUSTRIAL DATA

Publication No.:  US2025384266A1 18/12/2025
Applicant: 
SIEMENS AG [DE]
Siemens Aktiengesellschaft
CN_120283207_PA

Abstract of: US2025384266A1

A computer-implemented method for automatically creating a digital twin of an industrial system having one or more devices includes accessing a triple store that includes an aggregated ontology of graph-based industrial data synchronized with the one or more devices. The triple store is queried for a specified device to extract, from the graph-based industrial data, structural information of the specified device defined by a tree comprising a hierarchy of nodes. For each node, a neural network element is assigned based on a mapping of node types to pre-defined neural network elements. The assigned neural network elements are combined based on the tree topology to create a digital twin neural network. The triple store is then queried to extract, from the graph-based industrial data, real-time process data gathered from the specified device at runtime, and the extracted real-time process data is used to tune parameters of the digital twin neural network.
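
A minimal sketch of composing a digital-twin network from device structure; the triple-store query is mocked with a hard-coded tree, and the node-type-to-element mapping is invented for illustration:

```python
import numpy as np

tree = {"pump": ["motor", "sensor", "sensor"]}   # device hierarchy from the store
element_for_type = {                             # pre-defined NN elements
    "sensor": lambda x: np.maximum(x, 0.0),
    "motor":  lambda x: np.tanh(x),
    "pump":   lambda xs: sum(xs) / len(xs),      # parent aggregates its children
}

def digital_twin(node, x):
    children = tree.get(node, [])
    if not children:                             # leaf node: apply its element
        return element_for_type[node](x)
    outputs = [digital_twin(c, x) for c in children]
    return element_for_type[node](outputs)       # combine per the tree topology

# Real-time process data queried from the store feeds (and would tune) the twin.
print(digital_twin("pump", np.array([0.2, -0.7, 1.5])))
```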

MACHINE LEARNING SPARSE COMPUTATION MECHANISM FOR ARBITRARY NEURAL NETWORKS, ARITHMETIC COMPUTE MICROARCHITECTURE, AND SPARSITY FOR TRAINING MECHANISM

Publication No.:  US2025384257A1 18/12/2025
Applicant: 
INTEL CORP [US]
Intel Corporation
ES_2993741_T3

Abstract of: US2025384257A1

An apparatus to facilitate processing of a sparse matrix for arbitrary graph data is disclosed. The apparatus includes a graphics processing unit having a data management unit (DMU) that includes a scheduler for scheduling matrix operations, an active logic for tracking active input operands, and a skip logic for tracking unimportant input operands to be skipped by the scheduler. Processing circuitry is coupled to the DMU. The processing circuitry comprises a plurality of processing elements including logic to read operands and a multiplication unit to multiply two or more operands for the arbitrary graph data and customizable circuitry to provide custom functions.
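
A minimal software sketch of the skip idea, with the scheduler, active logic, and processing elements modelled as plain loops; a multiply-accumulate is skipped whenever an input operand is zero:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) * (rng.random((8, 8)) < 0.2)   # sparse matrix
x = rng.normal(size=8)

y = np.zeros(8)
mults = 0
for i in range(8):              # scheduler walks the matrix operations
    for j in range(8):
        if A[i, j] == 0.0 or x[j] == 0.0:
            continue            # skip logic: operand tracked as unimportant
        y[i] += A[i, j] * x[j]  # processing-element multiply-accumulate
        mults += 1

print(mults, "of 64 multiplies performed")
assert np.allclose(y, A @ x)    # skipping zeros does not change the result
```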

COMPUTER-IMPLEMENTED METHOD AND ARRANGEMENT FOR OPERATING A RECOMMENDATION SYSTEM, COMPUTER-READABLE DATA CARRIER AND COMPUTER PROGRAM PRODUCT

Publication No.:  WO2025257015A1 18/12/2025
Applicant: 
SIEMENS AG [DE]
SIEMENS AKTIENGESELLSCHAFT
WO_2025257015_A1

Abstract of: WO2025257015A1

The invention relates to a computer-implemented method for operating a recommendation system which involves a) converting at least one part of at least one 3D CAD model capable of acquiring so-called "product manufacturing information", PMI, data into a graph representation in such a way that the graph representation comprises existing PMI data of the 3D CAD model, b) training a so-called "graph neural network", GNN, model at least at a first, in particular initial, point in time, the GNN model being trained on the basis of at least the graph representation, c) generating at least one recommendation output concerning at least the part of the 3D CAD model, in particular an output by the recommendation system on the basis of the graph representation and the GNN model. Furthermore, the invention relates to an arrangement for carrying out the method, to a computer-readable data carrier, and to a computer program product.
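
A minimal sketch of the pipeline: a PMI-annotated part becomes a graph, a single GNN-style layer embeds it, and a score ranks recommendation outputs; the features, adjacency, weights, and recommendation labels are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# Nodes = CAD faces; features carry PMI data such as a tolerance value.
node_feats = np.array([[0.01, 1.0],     # face 0: 0.01 mm tolerance, planar
                       [0.05, 0.0],     # face 1: 0.05 mm tolerance, cylindrical
                       [0.01, 0.0]])    # face 2: 0.01 mm tolerance, cylindrical
adj = np.array([[1, 1, 0],              # face adjacency from the 3D CAD model
                [1, 1, 1],
                [0, 1, 1]], dtype=float)

W = rng.normal(size=(2, 4))
embeddings = np.maximum(adj @ node_feats @ W, 0.0)   # one GNN layer

recommendations = ["grinding", "milling", "turning"]
scores = embeddings.sum(axis=0) @ rng.normal(size=(4, 3))
print(recommendations[int(np.argmax(scores))])       # recommendation output
```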

OBJECTIVE-CONDITIONED GENERATIVE NEURAL NETWORKS

Publication No.:  WO2025260090A1 18/12/2025
Applicant: 
DEEPMIND TECH LIMITED [GB]
GDM HOLDING LLC [US]
DEEPMIND TECHNOLOGIES LIMITED
GDM HOLDING LLC
WO_2025260090_A1

Abstract of: WO2025260090A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a generative neural network. In particular, the generative neural network is trained on an objective function that includes multiple different objectives, with two or more of the objectives being reward objectives.
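
A minimal sketch of training against an objective function with two reward objectives, using a one-parameter generator and a REINFORCE-style update; purely illustrative, not the disclosed training setup:

```python
import numpy as np

def reward_a(sample):                   # first reward objective
    return -(sample - 1.0) ** 2

def reward_b(sample):                   # second reward objective
    return -(sample + 0.5) ** 2

rng = np.random.default_rng(0)
theta = 0.0                             # one-parameter "generator"
for _ in range(2000):
    eps = rng.normal()
    sample = theta + eps                # generate a sample
    r = 0.7 * reward_a(sample) + 0.3 * reward_b(sample)
    theta += 0.01 * r * eps             # REINFORCE-style update on the mixture

print(theta)   # drifts toward a compromise between the optima 1.0 and -0.5
```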

ANOMALY DETECTING EDGE DEVICE WITH QUANTIZED DEEP NEURAL NETWORK ON FPGA, ASIC, OR SOC

Publication No.:  WO2025255347A1 11/12/2025
Applicant: 
VIRGINIA COMMONWEALTH UNIV [US]
VIRGINIA COMMONWEALTH UNIVERSITY
WO_2025255347_PA

Abstract of: WO2025255347A1

Systems and methods are described for an edge device with a chip-implemented, reduced-bit-quantized neural network (NN)-based anomaly detector, including a preprocessing logic block for preprocessing the sensor data and a quantized, processing-in-memory (PIM) based NN block. The quantized PIM-based NN block includes quantized multiply-accumulate units (MACs) with quantized PIM logic that uses quantized-arithmetic-result lookup tables. The quantized MACs are arranged to function as quantized, PIM-based NN block kernels. Optionally, the quantized PIM-based NN block is implemented as an autoencoder with a quantized NN encoder section and a quantized NN decoder section. The quantized NN encoder section performs NN-based, quantized dimensionality reduction of the preprocessed sensor data, outputting a quantized lower-dimensional latent representation. The quantized NN decoder section reconstructs the quantized lower-dimensional latent representation, producing a quantized reconstructed input pattern. The quantized NN encoder and decoder sections include quantized MACs with PIM logic using quantized-arithmetic-result lookup tables.
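
A minimal sketch of the lookup-table idea: with few quantization levels, every possible product can be precomputed, so a MAC reduces to table lookups and additions; the toy autoencoder below uses random weights and is not the disclosed hardware design:

```python
import numpy as np

levels = np.linspace(-1, 1, 16)          # 4-bit quantization grid
lut = np.outer(levels, levels)           # all 16x16 possible products

def q(x):                                # index of the nearest quantization level
    return np.abs(np.asarray(x)[..., None] - levels).argmin(-1)

def lut_matvec(W_idx, x_idx):            # MAC via quantized-arithmetic lookups
    return np.array([lut[row, x_idx].sum() for row in W_idx])

rng = np.random.default_rng(0)
We = rng.normal(size=(3, 8)) * 0.3       # encoder section weights
Wd = rng.normal(size=(8, 3)) * 0.3       # decoder section weights
x = rng.normal(size=8) * 0.5             # preprocessed sensor data

latent = lut_matvec(q(We), q(x))         # quantized dimensionality reduction
recon = lut_matvec(q(Wd), q(latent))     # quantized reconstruction
anomaly_score = np.abs(levels[q(recon)] - levels[q(x)]).mean()
print(anomaly_score)                     # large reconstruction error => anomaly
```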

PERSONALIZED GENERATIVE NEURAL NETWORKS

Publication No.:  WO2025254658A1 11/12/2025
Applicant: 
GOOGLE LLC [US]
GOOGLE LLC
WO_2025254658_PA

Abstract of: WO2025254658A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating personalized output data items using a generative neural network. In one aspect, one of the methods includes maintaining a respective set of adaptation parameters of a generative neural network for each of a plurality of predetermined content providers; obtaining a new input associated with a new content provider; using a classifier model to generate a respective score for each of the plurality of predetermined content providers; generating a new set of adaptation parameters by determining a weighted combination of the respective sets of adaptation parameters for a subset of the plurality of predetermined content providers; and generating, by the generative neural network and based on the new set of adaptation parameters and the new input, a new personalized output data item.
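
A minimal sketch of adaptation-parameter mixing, assuming per-provider bias vectors as the adaptation parameters and softmax classifier scores; all shapes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
providers = ["prov_a", "prov_b", "prov_c"]            # predetermined providers
adapt = {p: rng.normal(size=4) for p in providers}    # adaptation parameters

new_input = rng.normal(size=4)          # input tied to a new content provider
logits = rng.normal(size=3)             # classifier scores for known providers
scores = np.exp(logits) / np.exp(logits).sum()

top2 = np.argsort(scores)[-2:]          # subset of most similar providers
w = scores[top2] / scores[top2].sum()   # weights for the combination
new_adapt = sum(wi * adapt[providers[i]] for wi, i in zip(w, top2))

# The generative network, conditioned on the mixed adaptation parameters.
output = np.tanh(new_input + new_adapt)
print(output)
```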

SPARSITY CONTROL BASED ON HARDWARE FOR DEEP-NEURAL NETWORKS

Publication No.:  US2025378334A1 11/12/2025
Applicant: 
INTEL CORP [US]
Intel Corporation
US_2025378334_PA

Abstract of: US2025378334A1

Systems, methods, computer program products, and apparatuses to transform a weight space of an inference model to increase the compute efficiency of a target inference platform. A density of a weight space can be determined, and a transformation parameter derived based on the determined density. The weight space can be re-ordered based on the transformation parameter to balance the compute load between the processing elements (PEs) of the target platform, and as such, reduce the idle time and/or stalls of the PEs.
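
A minimal sketch of density-driven re-ordering: per-row nonzero counts give the density, an argsort gives the transformation parameter, and rows are regrouped so two processing elements receive similar loads; the grouping scheme is an invented example:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16)) * (rng.random((8, 16)) < rng.random((8, 1)))

density = (W != 0).sum(axis=1)          # determined density, per row
order = np.argsort(density)             # transformation parameter

# Regroup rows: lightest two plus heaviest two on PE0, the middle four on PE1,
# so the loads are closer than a naive contiguous split of the weight space.
pe0 = np.concatenate([order[:2], order[-2:]])
pe1 = np.array([r for r in order if r not in pe0])
print("PE0 nonzeros:", int((W[pe0] != 0).sum()),
      "PE1 nonzeros:", int((W[pe1] != 0).sum()))
```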

INFORMATION PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

Publication No.:  WO2025251721A1 11/12/2025
Applicant: 
DOUYIN VISION CO LTD [CN]
抖音视界有限公司
WO_2025251721_PA

Abstract of: WO2025251721A1

According to embodiments of the present disclosure, provided are an information processing method and apparatus, a device, and a storage medium. The method comprises: determining a target processing layer from among a plurality of processing layers of a graph neural network, wherein a processing procedure corresponding to the target processing layer comprises a plurality of sub-processing procedures independent of each other; splitting the target processing layer into a plurality of sub-layers corresponding to the plurality of sub-processing procedures; using a plurality of processing devices to respectively process the plurality of sub-processing procedures corresponding to the plurality of sub-layers; and using a plurality of output results of processing of the plurality of processing devices in a summary layer corresponding to the plurality of sub-layers to determine a target output result for the target processing layer.
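
A minimal sketch of the split, assuming a GNN layer whose output is a sum of independent per-relation aggregations; each sub-layer would run on its own processing device (simulated sequentially here), and a summary layer combines the partial results:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 4))                                # node features
adjs = [np.eye(6)[rng.permutation(6)] for _ in range(3)]   # per-relation adjacency
Ws = [rng.normal(size=(4, 4)) for _ in range(3)]

def sub_layer(r):                       # independent sub-processing procedure,
    return adjs[r] @ h @ Ws[r]          # notionally run on processing device r

partials = [sub_layer(r) for r in range(3)]        # one result per device
target_output = np.maximum(sum(partials), 0.0)     # summary layer combines them

# The split changes only the schedule, not the result of the target layer.
full = np.maximum(sum(A @ h @ W for A, W in zip(adjs, Ws)), 0.0)
assert np.allclose(target_output, full)
```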

RPA WORKFLOW MANAGING SYSTEM AND METHOD BASED ON AN ARTIFICIAL NEURAL NETWORK

Publication No.:  KR20250172052A 09/12/2025

Applicant:

김찬호

KR_20250172052_PA

Abstract of: KR20250172052A

An RPA (Robotic Process Automation) workflow management system comprises at least one client computing device and an RPA management computing device. A processor of the client computing device performs the steps of: initiating, by a first RPA robot, a first RPA workflow in response to a start condition of the first RPA workflow being satisfied; and acquiring workflow video up to a first point in time related to the first RPA workflow, encoding the acquired workflow video to generate a first latent vector for the workflow video, and transmitting the latent vector to the RPA management computing device, which can communicate with the first RPA robot. A master processor of the RPA management computing device performs the step of adaptively applying the first latent vector to an RPA task prediction model according to the deviation between the first latent vector and a previously prepared reference latent vector, thereby generating a predicted task video for a second point in time subsequent to the first point in time.
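
A minimal sketch of the flow described above: the client encodes workflow video up to a first point in time into a latent vector, and the manager applies a prediction model adaptively based on the latent's deviation from a reference; every model here is a toy stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 64))      # workflow video up to the first time t1
W_enc = rng.normal(size=(64, 8))        # client-side encoder (toy)

latent = np.tanh(frames.mean(axis=0) @ W_enc)   # first latent vector, sent on
reference = np.zeros(8)                         # pre-prepared reference latent
deviation = float(np.linalg.norm(latent - reference))

W_pred = rng.normal(size=(8, 64))       # RPA task prediction model (toy)
if deviation > 0.5:                     # adaptive application of the model
    predicted_t2 = latent @ W_pred      # predicted task frame at a later t2
else:
    predicted_t2 = frames[-1]           # workflow on track: reuse the t1 frame
print(round(deviation, 3), predicted_t2.shape)
```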
