Ministerio de Industria, Turismo y Comercio
 

Alert

Results: 33
Last update: 21/10/2025 [07:24:00]
Applications published in the last 30 days
Results 25 to 33 of 33

VIDEO UPSAMPLING USING ONE OR MORE NEURAL NETWORKS

Publication No.: US2025299295A1 25/09/2025
Applicant:
NVIDIA CORP [US]
NVIDIA Corporation
US_12394015_PA

Abstract of: US2025299295A1

Apparatuses, systems, and techniques to enhance video are disclosed. In at least one embodiment, one or more neural networks are used to create a higher resolution video using upsampled frames from a lower resolution video.
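
As a rough illustration of the described pipeline (not NVIDIA's claimed method — the neural refinement stage is omitted here, and all names are hypothetical), a low-resolution frame can first be upsampled before a network would refine it into the higher-resolution output:

```python
import numpy as np

def upsample_frame(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Nearest-neighbour upsample of an HxWxC frame by an integer factor.

    In the abstract's scheme, one or more neural networks would then
    refine these upsampled frames; that refinement step is omitted.
    """
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

low_res = np.arange(12, dtype=np.float32).reshape(2, 2, 3)  # tiny 2x2 RGB frame
high_res = upsample_frame(low_res, scale=2)
print(high_res.shape)  # (4, 4, 3)
```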

INFORMATION PROCESSING APPARATUS, INFERENCE METHOD, AND STORAGE MEDIUM

Publication No.: US2025299051A1 25/09/2025
Applicant:
CANON KK [JP]
CANON KABUSHIKI KAISHA

Abstract of: US2025299051A1

An information processing apparatus configured to execute inference using a convolutional neural network, including: an obtainment unit configured to obtain target data from data for inference inputted into the information processing apparatus; and a computation unit configured to execute convolutional computation and output computation result data, the convolutional computation using computation data including the target data obtained by the obtainment unit and margin data different from the target data that is required to obtain the computation result data in a predetermined size, in which the obtainment unit obtains first data, which is a part of the margin data, from a data group existing around the target data separately from the target data in the data for inference and does not obtain second data, which is the margin data except the first data, from the data group.
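
A minimal sketch of the underlying idea, assuming a tiled 2-D input (the tile size, kernel size, and zero-fill policy are illustrative, not the claimed apparatus): a valid k×k convolution over a tile needs a margin of width k//2 around it, and margin that cannot or need not be fetched can be filled locally instead:

```python
import numpy as np

def tile_with_margin(data: np.ndarray, y: int, x: int, tile: int, k: int) -> np.ndarray:
    """Return the (y, x) tile of `data` plus the margin of width k//2 that a
    'valid' k x k convolution needs to produce a tile x tile output.

    Margin falling outside `data` is zero-filled rather than fetched,
    loosely mirroring the first-data / second-data split in the abstract.
    """
    m = k // 2
    padded = np.pad(data, m)  # zero border stands in for data not obtained
    return padded[y:y + tile + 2 * m, x:x + tile + 2 * m]

data = np.arange(64, dtype=np.float32).reshape(8, 8)
block = tile_with_margin(data, 0, 0, tile=4, k=3)
print(block.shape)  # (6, 6) — a 4x4 tile plus a 1-pixel margin on each side
```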

SYSTEM AND METHODS FOR DISTRIBUTED LEARNING IN A RADIO ACCESS NETWORK WITH MULTIPLE AGENTS

Publication No.: WO2025194307A1 25/09/2025
Applicant:
HUAWEI TECH CO LTD [CN]
HUAWEI TECHNOLOGIES CO., LTD
WO_2025194307_PA

Abstract of: WO2025194307A1

A network node is configured to receive global network states (GNSs) from a central node, each GNS representing the state of a global environment of the network for a time-based criterion. The network node then generates multiple local neural network (NN) models, each corresponding to at least one GNS, and is configured to receive a local observation vector (OV) based on an environment state of the network node and generate a local action vector (AV) accordingly. The network node also receives environment states from neighbor nodes, combines them with its own environment state to identify a specific GNS, selects the local NN model that corresponds to that GNS, and uses it to generate the local AV for application to its environment. The network states can include the demands on the resources at the network node, and the local AV can include an adjustment in the allocation of those resources according to the demands.
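
A toy sketch of the selection mechanism only (the GNS-identification rule, model shapes, and numbers here are invented for illustration; the real scheme would use trained NN models, not random linear maps): the node buckets the combined environment states into a GNS index, then applies the local model keyed by that index:

```python
import numpy as np

rng = np.random.default_rng(0)
# One small linear "policy" per global network state (GNS) index — stand-ins
# for the multiple local NN models the node generates.
local_models = {g: rng.normal(size=(3, 4)) for g in range(4)}

def identify_gns(own_state: float, neighbor_states: list) -> int:
    """Toy rule: bucket the mean combined load into one of 4 GNS indices."""
    combined = np.mean([own_state] + neighbor_states)
    return min(int(combined * 4), 3)

def act(observation: np.ndarray, own_state: float, neighbor_states: list) -> np.ndarray:
    g = identify_gns(own_state, neighbor_states)       # pick the matching GNS
    return local_models[g] @ observation               # local action vector

av = act(np.ones(4), 0.5, [0.3, 0.9])
print(av.shape)  # (3,)
```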

FEATURE-SPECIFIC ATTENTION ARRAYS FOR EVENT SEQUENCE CHARACTERIZATION

Publication No.: WO2025199173A1 25/09/2025
Applicant:
CAPITAL ONE SERVICES LLC [US]
CAPITAL ONE SERVICES, LLC
US_2025299066_PA

Abstract of: WO2025199173A1

A method and related system for efficiently capturing relationships between event feature values in embeddings includes flattening an event sequence into a feature sequence including a first event prefix, a second event prefix, and a first set of feature values. The method includes generating an attention mask including first mask indicators to associate the first set of feature values with each other and second mask indicator to associate a first feature value of the first set of feature values with the second event prefix. The method includes providing the feature sequence and the attention mask to a self-attention neural network model to generate an embedding.
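
A small sketch of such a mask, under assumed toy semantics (two events, two feature values each; the token layout and attention rules are illustrative, not the claimed construction): first-indicator entries let feature values of the same event attend to each other, and a second-indicator entry lets a feature value attend to the next event's prefix:

```python
import numpy as np

# Flattened sequence: event prefixes E1, E2 interleaved with feature values.
tokens = ["E1", "f1a", "f1b", "E2", "f2a", "f2b"]
event_of = [0, 0, 0, 1, 1, 1]                      # owning event per token
is_prefix = [True, False, False, True, False, False]

n = len(tokens)
mask = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        # First indicators: feature values of one event attend to each other.
        same_event_features = (event_of[i] == event_of[j]
                               and not is_prefix[i] and not is_prefix[j])
        # Second indicator: a feature value attends to the next event's prefix.
        next_prefix = (is_prefix[j] and not is_prefix[i]
                       and event_of[j] == event_of[i] + 1)
        mask[i, j] = same_event_features or next_prefix

print(mask.shape)  # (6, 6), ready to hand to a self-attention model
```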

REAL-TIME DETECTION OF NETWORK THREATS USING A GRAPH-BASED MODEL

Publication No.: WO2025199388A1 25/09/2025
Applicant:
UNIV ILLINOIS [US]
THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
WO_2025199388_PA

Abstract of: WO2025199388A1

The present disclosure provides methods and systems for performing intrusion detection on a computing system using streaming embedding and detection, alongside other improvements. Intrusion detection may be implemented by recording events occurring within a computing system in an audit log. From this audit log, a provenance graph representing the events and the causal relationships among them may be generated. The provenance graph may be supplemented by a pseudo-graph that connects each event occurring in the computing system to one or more root causes. A neural network may then be trained to represent the behavior of the computing system based on this pseudo-graph. The present disclosure also provides other systems and methods for intrusion detection and for modeling computing system behavior.
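
A minimal sketch of the root-cause idea (the event names and graph are invented; the real system operates on audit-log provenance graphs at scale): walking causal edges backwards from an event to events with no recorded cause yields exactly the links a pseudo-graph would add explicitly:

```python
from collections import defaultdict

# Toy provenance graph: each edge is (cause, effect) taken from an audit log.
edges = [("login", "shell"), ("shell", "read_file"), ("shell", "connect")]

parents = defaultdict(list)
for cause, effect in edges:
    parents[effect].append(cause)

def root_causes(event: str) -> set:
    """Walk causal edges backwards to events with no recorded cause."""
    if event not in parents:
        return {event}
    roots = set()
    for p in parents[event]:
        roots |= root_causes(p)
    return roots

print(root_causes("connect"))  # {'login'}
```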

CUSTOMIZED MACHINE LEARNING MODELS

Publication No.: US2025299041A1 25/09/2025
Applicant:
AMAZON TECH INC [US]
Amazon Technologies, Inc
US_12354002_PA

Abstract of: US2025299041A1

An adapter layer may be used to customize a machine learning component by transforming data flowing into, out of, and/or within the machine learning component. The adapter layer may include a number of neural network components, or “adapters,” configured to perform a transformation on input data. Neural network components may be configured into adapter groups. A router component can, based on the input data, select one or more neural network components for transforming the input data. The adapter layer may combine the results of any such transformations to yield adapted data. Different adapter groups can include adapters of different complexity (e.g., involving different amounts of computation and/or latency). Thus, the amount of computation or latency added by an adapter layer can be reduced for simpler transformations of the input data.
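
A toy sketch of routing between groups of different cost (the norm-based router and linear adapters are invented stand-ins, not the claimed design): a cheap group handles simple inputs, a costlier group handles the rest, and each group's outputs are combined into the adapted data:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
# Two hypothetical adapter groups of different complexity.
cheap_group = [np.eye(d)]                                  # near-free transform
rich_group = [rng.normal(size=(d, d)) for _ in range(2)]   # costlier adapters

def route(x: np.ndarray) -> list:
    """Toy router: small inputs take the cheap group, the rest the rich one."""
    return cheap_group if np.linalg.norm(x) < 1.0 else rich_group

def adapt(x: np.ndarray) -> np.ndarray:
    group = route(x)
    # Combine each selected adapter's transformation into the adapted data.
    return np.mean([A @ x for A in group], axis=0)

small = np.full(d, 0.1)
print(np.allclose(adapt(small), small))  # True — the identity adapter was chosen
```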

DYNAMIC PRECISION FOR NEURAL NETWORK COMPUTE OPERATIONS

Publication No.: US2025299032A1 25/09/2025
Applicant:
INTEL CORP [US]
Intel Corporation
ES_2986903_T3

Abstract of: US2025299032A1

In an example, an apparatus comprises a compute engine comprising a high precision component and a low precision component; and logic, at least partially including hardware logic, to receive instructions in the compute engine; select at least one of the high precision component or the low precision component to execute the instructions; and apply a gate to at least one of the high precision component or the low precision component to execute the instructions. Other embodiments are also disclosed and claimed.
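
A software analogue of the gating idea (the hardware components are simulated here with float32 vs float16 arithmetic; this is an illustrative sketch, not Intel's apparatus): a gate routes the same operation onto a high- or low-precision path:

```python
import numpy as np

def gated_matmul(a: np.ndarray, b: np.ndarray, high_precision: bool) -> np.ndarray:
    """Gate the multiply onto a high- or low-precision 'component',
    simulated by choosing float32 or float16 arithmetic."""
    dtype = np.float32 if high_precision else np.float16
    return (a.astype(dtype) @ b.astype(dtype)).astype(np.float32)

rng = np.random.default_rng(0)
a, b = rng.normal(size=(4, 8)), rng.normal(size=(8, 4))
hi = gated_matmul(a, b, high_precision=True)
lo = gated_matmul(a, b, high_precision=False)
print(hi.shape)  # (4, 4) either way; only the arithmetic precision differs
```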

EFFICIENCY ADJUSTABLE SPEECH RECOGNITION SYSTEM

Publication No.: EP4621769A2 24/09/2025
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4621769_PA

Abstract of: EP4621769A2

A computing system is configured to generate a transformer-transducer-based deep neural network. The transformer-transducer-based deep neural network comprises a transformer encoder network and a transducer predictor network. The transformer encoder network has a plurality of layers, each of which includes a multi-head attention network sublayer and a feed-forward network sublayer. The computing system trains an end-to-end (E2E) automatic speech recognition (ASR) model using the transformer-transducer-based deep neural network. The E2E ASR model has one or more adjustable hyperparameters that are configured to dynamically adjust the efficiency or performance of the E2E ASR model when the model is deployed onto a device or executed by the device.
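
A bare-bones sketch of one encoder layer with the two sublayers the abstract names (single-head attention instead of multi-head, no layer norm, random weights — an illustration of the structure, not Microsoft's model; `d` and `h` stand in for the adjustable width hyperparameters):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encoder_layer(x, Wq, Wk, Wv, W1, W2):
    """One transformer encoder layer: a self-attention sublayer followed by
    a feed-forward sublayer, each with a residual connection."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v
    x = x + attn                               # attention sublayer + residual
    ff = np.maximum(0, x @ W1) @ W2            # ReLU feed-forward sublayer
    return x + ff

rng = np.random.default_rng(0)
d, h = 8, 16  # model width and FFN width — the kind of tunable hyperparameters
x = rng.normal(size=(5, d))                    # 5 frames of d-dim features
params = [rng.normal(size=s) for s in [(d, d)] * 3 + [(d, h), (h, d)]]
y = encoder_layer(x, *params)
print(y.shape)  # (5, 8)
```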
