Ministerio de Industria, Turismo y Comercio
 

Alert

46 results.
Updated on 17/10/2025 [07:37:00]
Applications published in the last 30 days
Results 25 to 46 of 46

FEATURE-SPECIFIC ATTENTION ARRAYS FOR EVENT SEQUENCE CHARACTERIZATION

Publication No.:  US2025299066A1 25/09/2025
Applicant: 
CAPITAL ONE SERVICES LLC [US]
Capital One Services, LLC
US_2025299066_PA

Abstract of: US2025299066A1

A method and related system for efficiently capturing relationships between event feature values in embeddings includes flattening an event sequence into a feature sequence including a first event prefix, a second event prefix, and a first set of feature values. The method includes generating an attention mask including first mask indicators to associate the first set of feature values with each other and a second mask indicator to associate a first feature value of the first set of feature values with the second event prefix. The method includes providing the feature sequence and the attention mask to a self-attention neural network model to generate an embedding.
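
To make the masking scheme concrete, here is a minimal sketch in Python; the prefix tokens, feature strings, and mask rule are illustrative assumptions, not the applicant's implementation:

    import numpy as np

    def flatten_events(events):
        """Flatten [[feature, ...], ...] into tokens with a prefix per event."""
        tokens, event_ids = [], []
        for i, features in enumerate(events):
            tokens.append(f"<event_{i}>")       # event prefix token
            event_ids.append(i)
            for f in features:
                tokens.append(f)                # feature-value token
                event_ids.append(i)
        return tokens, event_ids

    def build_attention_mask(tokens, event_ids):
        n = len(tokens)
        mask = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(n):
                # "first mask indicators": feature values of one event see each other
                same_event = event_ids[i] == event_ids[j]
                # "second mask indicator": a feature value also sees the next event's prefix
                next_prefix = tokens[j].startswith("<event_") and event_ids[j] == event_ids[i] + 1
                mask[i, j] = same_event or next_prefix
        return mask

    tokens, ids = flatten_events([["amount=20", "merchant=cafe"], ["amount=5"]])
    print(build_attention_mask(tokens, ids).astype(int))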

SOLUTIONS DELIVERY - SOLUTIONS DISCOVERY TOOL

Publication No.:  US2025299067A1 25/09/2025
Applicant: 
TRUIST BANK [US]
Truist Bank
US_2025299067_PA

Abstract of: US2025299067A1

A system and method for automatically providing a bank agent with questions to ask a client of the bank based on known information about the client and answers to previous questions provided to the client, and then providing a financial solution or product that may help the client. The method includes asking the client an initial question, providing an answer by the client to the initial question, providing a follow-up question in response to the answer provided to the initial question that is generated by a machine learning model in a processor, accepting an answer to the follow-up question, and providing additional follow-up questions in response to previous questions and answers that are generated by the machine learning model, where the machine learning model uses at least one neural network having nodes that have been trained to provide the questions based on the previous questions and answers.
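
A toy sketch of the loop this abstract describes; the QuestionModel class and its next_question method are hypothetical stand-ins for the patent's trained neural network:

    class QuestionModel:
        def next_question(self, history):
            # A real system would run a trained model over known client data
            # plus the prior questions and answers.
            return "What are your savings goals?" if history else "What brings you in today?"

    def interview(model, ask, rounds=3):
        history = []
        for _ in range(rounds):  # follow-ups conditioned on previous Q&A
            question = model.next_question(history)
            history.append((question, ask(question)))
        return history

    print(interview(QuestionModel(), ask=lambda q: "a canned client answer"))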

VIDEO UPSAMPLING USING ONE OR MORE NEURAL NETWORKS

Publication No.:  US2025299295A1 25/09/2025
Applicant: 
NVIDIA CORP [US]
NVIDIA Corporation
US_12394015_PA

Abstract of: US2025299295A1

Apparatuses, systems, and techniques to enhance video are disclosed. In at least one embodiment, one or more neural networks are used to create a higher resolution video using upsampled frames from a lower resolution video.
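
A minimal sketch of the idea, assuming a bilinear upsample followed by a small refinement network (PyTorch; the architecture is a placeholder, not NVIDIA's):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RefineNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
            self.conv2 = nn.Conv2d(32, 3, 3, padding=1)

        def forward(self, upsampled):
            # residual refinement of the naively upsampled frames
            return upsampled + self.conv2(F.relu(self.conv1(upsampled)))

    frames = torch.rand(8, 3, 180, 320)   # low-res video, T x C x H x W
    up = F.interpolate(frames, scale_factor=4, mode="bilinear", align_corners=False)
    high_res = RefineNet()(up)            # 8 x 3 x 720 x 1280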

SYSTEM AND METHODS FOR DISTRIBUTED LEARNING IN A RADIO ACCESS NETWORK WITH MULTIPLE AGENTS

Publication No.:  WO2025194307A1 25/09/2025
Applicant: 
HUAWEI TECH CO LTD [CN]
HUAWEI TECHNOLOGIES CO., LTD
WO_2025194307_PA

Abstract of: WO2025194307A1

A network node is configured to receive global network states (GNSs) from a central node, each GNS representing the state of a global environment of the network for a time-based criterion. The network node then generates multiple local neural network (NN) models, each corresponding to at least one GNS, and is configured to receive a local observation vector (OV) based on an environment state of the ancillary node and generate a local action vector (AV) accordingly. The network node also receives environment states from neighbor nodes, combines them with its own environment state to identify a specific GNS, selects a local NN model that corresponds to that GNS, and uses it to generate the local AV for application to its environment. The network states can include the demands on the resources at the network node and the local AV can include an adjustment in the allocation of the resources according to the demands.
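
An illustrative sketch of the selection logic; the nearest-prototype matching rule, shapes, and linear stand-in models are all assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    gns_prototypes = {g: rng.random(4) for g in range(3)}           # known global network states
    local_models = {g: rng.random((4, 4)) * 0.1 for g in range(3)}  # one local NN model per GNS

    def identify_gns(own_state, neighbor_states):
        combined = np.mean([own_state] + neighbor_states, axis=0)
        # pick the GNS whose prototype is closest to the combined environment state
        return min(gns_prototypes, key=lambda g: np.linalg.norm(gns_prototypes[g] - combined))

    def local_action_vector(observation, own_state, neighbor_states):
        gns = identify_gns(own_state, neighbor_states)
        return local_models[gns] @ observation   # e.g. adjustments to resource allocation

    av = local_action_vector(np.ones(4), rng.random(4), [rng.random(4), rng.random(4)])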

FEATURE-SPECIFIC ATTENTION ARRAYS FOR EVENT SEQUENCE CHARACTERIZATION

Publication No.:  WO2025199173A1 25/09/2025
Applicant: 
CAPITAL ONE SERVICES LLC [US]
CAPITAL ONE SERVICES, LLC
US_2025299066_PA

Abstract of: WO2025199173A1

A method and related system for efficiently capturing relationships between event feature values in embeddings includes flattening an event sequence into a feature sequence including a first event prefix, a second event prefix, and a first set of feature values. The method includes generating an attention mask including first mask indicators to associate the first set of feature values with each other and a second mask indicator to associate a first feature value of the first set of feature values with the second event prefix. The method includes providing the feature sequence and the attention mask to a self-attention neural network model to generate an embedding.

REAL-TIME DETECTION OF NETWORK THREATS USING A GRAPH-BASED MODEL

Publication No.:  WO2025199388A1 25/09/2025
Applicant: 
UNIV ILLINOIS [US]
THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
WO_2025199388_PA

Abstract of: WO2025199388A1

The present disclosure presents methods and systems to perform intrusion detection on a computing system using streaming embedding and detection alongside other improvements. Intrusion detection may be implemented by recording events occurring within a computing system in an audit log. From this audit log, a provenance graph representing the events and causal relationships of the events occurring within the computing system may be generated. The provenance graph may be supplemented by a pseudo-graph that connects each event occurring in the computing system to one or more root causes. Then, a neural network may be trained to represent behavior of the computing system based on this pseudo-graph. The present disclosure also presents other systems and methods of intrusion detection and modeling computing system behavior.
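
A minimal sketch of the graph construction, assuming a simple audit-log schema; the pseudo-graph step here only illustrates linking events to candidate root causes:

    import networkx as nx

    audit_log = [
        ("bash", "spawned", "curl"),
        ("curl", "wrote", "/tmp/payload"),
        ("bash", "read", "/etc/passwd"),
    ]

    provenance = nx.DiGraph()
    for src, relation, dst in audit_log:
        provenance.add_edge(src, dst, relation=relation)   # causal edge between entities

    # Pseudo-graph: connect every reachable node to candidate root causes
    # (ancestors with no parents of their own).
    roots = [n for n in provenance if provenance.in_degree(n) == 0]
    pseudo = provenance.copy()
    for node in provenance:
        for root in roots:
            if node != root and nx.has_path(provenance, root, node) and not pseudo.has_edge(root, node):
                pseudo.add_edge(root, node, relation="root_cause")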

INFORMATION PROCESSING APPARATUS, INFERENCE METHOD, AND STORAGE MEDIUM

Publication No.:  US2025299051A1 25/09/2025
Applicant: 
CANON KK [JP]
CANON KABUSHIKI KAISHA

Abstract of: US2025299051A1

An information processing apparatus configured to execute inference using a convolutional neural network, including: an obtainment unit configured to obtain target data from data for inference inputted in the information processing apparatus; and a computation unit configured to execute convolutional computation and output computation result data, the convolutional computation using computation data including the target data obtained by the obtainment unit and margin data different from the target data that is required to obtain the computation result data in a predetermined size, in which the obtainment unit obtains first data, which is a part of the margin data, from a data group existing around the target data separately from the target data in the data for inference and does not obtain second data, which is the margin data except the first data, from the data group.
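
A sketch of tiled computation with margin (halo) data under assumed tile and kernel sizes; margin that actually exists around the target tile is read from the image, and out-of-range margin stays zero:

    import numpy as np
    from scipy.signal import convolve2d

    def tile_with_margin(image, y, x, tile=8, margin=1):
        out = np.zeros((tile + 2 * margin, tile + 2 * margin), dtype=image.dtype)
        y0, x0 = max(y - margin, 0), max(x - margin, 0)
        y1 = min(y + tile + margin, image.shape[0])
        x1 = min(x + tile + margin, image.shape[1])
        oy, ox = y0 - (y - margin), x0 - (x - margin)   # offset inside the padded block
        out[oy:oy + (y1 - y0), ox:ox + (x1 - x0)] = image[y0:y1, x0:x1]
        return out

    image = np.random.rand(32, 32)
    kernel = np.ones((3, 3)) / 9.0
    block = tile_with_margin(image, 0, 0)             # 10x10 block for an 8x8 tile
    result = convolve2d(block, kernel, mode="valid")  # 8x8 result of predetermined size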

EFFICIENCY ADJUSTABLE SPEECH RECOGNITION SYSTEM

Publication No.:  EP4621769A2 24/09/2025
Applicant: 
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4621769_PA

Abstract of: EP4621769A2

A computing system is configured to generate a transformer-transducer-based deep neural network. The transformer-transducer-based deep neural network comprises a transformer encoder network and a transducer predictor network. The transformer encoder network has a plurality of layers, each of which includes a multi-head attention network sublayer and a feed-forward network sublayer. The computing system trains an end-to-end (E2E) automatic speech recognition (ASR) model, using the transformer-transducer-based deep neural network. The E2E ASR model has one or more adjustable hyperparameters that are configured to dynamically adjust an efficiency or a performance of the E2E ASR model when the E2E ASR model is deployed onto a device or executed by the device.
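
A minimal sketch of one encoder layer with the two sublayers named above (PyTorch); dimensions and layer count are illustrative, not Microsoft's configuration:

    import torch
    import torch.nn as nn

    class EncoderLayer(nn.Module):
        def __init__(self, d_model=256, heads=4, d_ff=1024):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

        def forward(self, x):
            x = self.norm1(x + self.attn(x, x, x)[0])   # multi-head attention sublayer
            return self.norm2(x + self.ff(x))           # feed-forward sublayer

    encoder = nn.Sequential(*[EncoderLayer() for _ in range(6)])
    frames = torch.rand(2, 100, 256)                    # batch x time x features
    encoded = encoder(frames)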

NEURAL NETWORK TRAINING FOR VIDEO GENERATION

Publication No.:  US2025292125A1 18/09/2025
Applicant: 
REVEALIT CORP [US]
Revealit Corporation
US_2025292125_PA

Abstract of: US2025292125A1

A computer-implemented video generation training method and system performs unsupervised training of neural networks using training sets that comprise images, which may be sequentially arranged as videos. The unsupervised training includes obscuring subsets of pixels that are within each of the images. During the training the neural networks automatically learn correspondences among subsets of pixels in the images. An instruction is received from a user and representations of pixel patterns are generated by the trained computer-implemented neural networks in response to the instruction. The pixel patterns are included within a video stream that is provided to the user.
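
A hedged sketch of one training step with obscured pixels (PyTorch); the tiny convolutional net and mask ratio are placeholders for the patent's networks:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    frames = torch.rand(4, 3, 64, 64)                   # a batch of video frames
    keep = (torch.rand(4, 1, 64, 64) > 0.25).float()    # obscure ~25% of pixels
    recon = model(frames * keep)                        # reconstruct from visible pixels
    loss = ((recon - frames) ** 2 * (1 - keep)).mean()  # scored on obscured pixels only
    opt.zero_grad(); loss.backward(); opt.step()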

GRAPH-BASED ENTITY RESOLUTION FRAMEWORK USING VARIATIONAL INFERENCE

Publication No.:  US2025292065A1 18/09/2025
Applicant: 
ORACLE INT CORP [US]
Oracle International Corporation
US_2025292065_PA

Abstract of: US2025292065A1

The present disclosure relates to entity resolution between graphs of entities and their relations. A language model (LM) and a graph neural network (GNN) may be iteratively trained. A plurality of first node embeddings for a plurality of nodes in a graph may be generated using the LM. A plurality of second node embeddings for the plurality of nodes based at least in part on the plurality of first node embeddings and the graph may be generated using the GNN. A first node and a second node of the plurality of nodes that both represent a particular entity may be identified based at least in part on a similarity between one of the plurality of second node embeddings associated with the first node and one of the plurality of second node embeddings associated with the second node.
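
An illustrative sketch of only the final matching step: comparing second-stage (GNN) node embeddings by cosine similarity, with random stand-in embeddings:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    second_embeddings = {f"node_{i}": rng.normal(size=64) for i in range(4)}
    # make node_3 a near-duplicate of node_0, as if both represent one entity
    second_embeddings["node_3"] = second_embeddings["node_0"] + 0.01 * rng.normal(size=64)

    pairs = [(a, b) for a in second_embeddings for b in second_embeddings if a < b]
    matches = [(a, b) for a, b in pairs
               if cosine(second_embeddings[a], second_embeddings[b]) > 0.95]
    print(matches)   # node_0 and node_3 resolve to the same entity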

System and Method for Accurate Responses from Chatbots and LLMs

Publication No.:  US2025291828A1 18/09/2025
Applicant: 
ACURAI INC [US]
Acurai, Inc
US_2025291828_PA

Abstract of: US2025291828A1

Systems and methods are described for obtaining accurate responses from large language models (LLMs) and chatbots, including for question answering, exposition, and summarization. These systems and methods accomplish these objectives via use of noun phrase avoiding processes such as a noun phrase collision detection process, a query splitting process, and a topical splitting process as well as by use of formatted facts, formatted fact model correction interfaces (FF MCIs), bounded-scope deterministic (BSD) neural networks, processes and methods, and intelligent storage and retrieval (ISAR) systems and methods. These systems and methods avoid and bypass noun phrase collisions and correct for errors caused by noun phrase collisions so that hallucinations are eliminated from LLM responses.
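
A speculative heuristic only, since the collision test is not spelled out in the abstract: flag a noun phrase that appears in more than one retrieved context chunk, which would then trigger query or topical splitting:

    import re
    from collections import defaultdict

    def noun_phrases(text):
        # crude stand-in for a real noun-phrase extractor: capitalized spans
        return [m.strip() for m in re.findall(r"(?:[A-Z][a-z]+ ?){1,3}", text)]

    def collisions(chunks):
        seen = defaultdict(set)
        for i, chunk in enumerate(chunks):
            for phrase in noun_phrases(chunk):
                seen[phrase].add(i)
        return {p: ids for p, ids in seen.items() if len(ids) > 1}

    chunks = ["John Smith joined Acme in 2001.", "John Smith left Initech in 2005."]
    print(collisions(chunks))   # {'John Smith': {0, 1}} -> split before prompting the LLM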

ARTIFICIAL NEURAL NETWORK COMPUTING SYSTEMS

Publication No.:  US2025291405A1 18/09/2025
Applicant: 
CIRRUS LOGIC INT SEMICONDUCTOR LTD [GB]
Cirrus Logic International Semiconductor Ltd
US_2025291405_PA

Abstract of: US2025291405A1

The present disclosure relates to an artificial neural network (ANN) computing system comprising: a buffer configured to store data indicative of input data received from an input device; an inference engine operative to process data from the buffer to generate an interest metric for the input data; and a controller. The controller is operative to control a mode of operation of the inference engine according to the interest metric for the input data.
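
A sketch of the controller loop this abstract describes; the interest metric here is a simple energy threshold, purely as a placeholder:

    import numpy as np

    def interest_metric(samples):
        return float(np.mean(samples ** 2))    # e.g. audio energy as "interest"

    def controller(buffer, threshold=0.1):
        mode = "low_power"
        for frame in buffer:
            if interest_metric(frame) > threshold:
                mode = "full_inference"        # wake the expensive inference engine
            yield mode

    buffer = [np.random.rand(160) * scale for scale in (0.1, 0.1, 1.0)]
    print(list(controller(buffer)))            # ['low_power', 'low_power', 'full_inference']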

AI MODEL PROTECTION FOR AI PCS

Publication No.:  US2025292357A1 18/09/2025
Applicant: 
INTEL CORP [US]
Intel Corporation
US_2025292357_PA

Abstract of: US2025292357A1

One embodiment provides a graphics processor comprising a base die including a plurality of chiplet sockets and a plurality of chiplets coupled with the plurality of chiplet sockets. At least one of the plurality of chiplets includes a graphics processing cluster including a plurality of processing resources, the plurality of processing resources including a matrix accelerator having circuitry to perform operations for a neural network in which model topology and weights of the neural network are encrypted. The matrix accelerator is configured to execute commands of a command buffer, the commands generated based on a decomposition of the model topology of the neural network, and to access encrypted weights in memory of the graphics processor via circuitry configured to decrypt the encrypted weights via a key that is programmed into the hardware of the circuitry.
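
A conceptual sketch only: weights are stored encrypted and decrypted just before use. Real hardware performs this inside the accelerator with a hardware-programmed key; the Fernet cipher here is an illustrative substitute:

    import numpy as np
    from cryptography.fernet import Fernet   # pip install cryptography

    key = Fernet.generate_key()              # in hardware, programmed into the circuitry
    fernet = Fernet(key)

    weights = np.random.rand(4, 4).astype(np.float32)
    encrypted = fernet.encrypt(weights.tobytes())   # what would live in GPU memory

    # decrypt only at the point of use, then run the matrix operation
    decrypted = np.frombuffer(fernet.decrypt(encrypted), dtype=np.float32).reshape(4, 4)
    activations = np.random.rand(4).astype(np.float32)
    output = decrypted @ activations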

SPACE EFFICIENT TRAINING FOR SEQUENCE TRANSDUCTION MACHINE LEARNING

Publication No.:  US2025292764A1 18/09/2025
Applicant: 
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_2025292764_PA

Abstract of: US2025292764A1

Efficient training is provided for models comprising RNN-T (recurrent neural network transducers). The model transducers comprise an encoder, a decoder, and a fused joint network. The fused joint network receives encoding and decoding embeddings from the encoder and decoder. During training, the model stores the probability data for the next blank output and the next token at each time step rather than storing all probabilities for all possible outputs. This can significantly reduce requirements for memory storage, while still preserving the relevant information required to calculate the loss that will be backpropagated through the neural transducer during training to update the parameters of the neural transducer and to generate a trained or modified neural transducer. The computation of embeddings can also be divided into small slices and some of the utterance padding used for the training samples can also be removed to further reduce the memory storage requirements.
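
An illustrative rendering of the memory-saving step: keep only the blank and next-target log-probabilities per lattice point instead of the full vocabulary distribution (PyTorch; shapes assumed):

    import torch

    T, U, V, blank = 50, 20, 10000, 0
    log_probs = torch.randn(T, U, V).log_softmax(-1)   # full joint-network output
    targets = torch.randint(1, V, (U,))                # next token at each decoder step

    blank_lp = log_probs[:, :, blank]                  # (T, U) blank log-probs
    token_lp = log_probs.gather(
        2, targets.view(1, U, 1).expand(T, U, 1)).squeeze(2)   # (T, U) target log-probs

    # The transducer loss needs only these two (T, U) tensors, not the (T, U, V) lattice:
    print(log_probs.numel(), "->", blank_lp.numel() + token_lp.numel())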

OPTICAL INFORMATION READING DEVICE

Publication No.:  US2025292044A1 18/09/2025
Applicant: 
KEYENCE CO LTD [JP]
Keyence Corporation
US_2025292044_PA

Abstract of: US2025292044A1

An optical information reading device suppresses an increase in processing time due to the load of inference processing while improving reading accuracy through machine-learning inference. The device includes a processor including: an inference processing part that inputs a code image to a neural network and executes inference processing to generate an ideal image corresponding to the code image; and a decoding processing part that executes first decoding processing to decode the code image and second decoding processing to decode the ideal image generated by the inference processing part. The processor executes the inference processing and the first decoding processing in parallel, and executes the second decoding processing after completion of the inference processing.
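
A sketch of the parallel schedule described above, with trivial stand-in functions for the device's inference and decoding stages:

    from concurrent.futures import ThreadPoolExecutor

    def infer_ideal_image(code_image):
        return code_image.upper()            # stand-in for neural-network restoration

    def decode(image):
        return f"decoded({image})" if "OK" in image else None

    code_image = "noisy-ok-code"
    with ThreadPoolExecutor() as pool:
        inference = pool.submit(infer_ideal_image, code_image)   # inference ...
        first = decode(code_image)                # ... in parallel with first decoding
        second = decode(inference.result())       # second decoding after inference completes

    print(first or second)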

HIERARCHY OF NEURAL NETWORK SCALING FACTORS

Publication No.:  US2025292362A1 18/09/2025
Applicant: 
INTEL CORP [US]
Intel Corporation
US_2025292362_PA

Abstract of: US2025292362A1

Embodiments described herein provide techniques to facilitate hierarchical scaling when quantizing neural network data to a reduced-bit representation. The techniques include operations to load a hierarchical scaling map for a tensor associated with a neural network, partition the tensor into a plurality of regions that respectively include one or more subregions based on the hierarchical scaling map, hierarchically scale numerical values of the tensor based on a first scale factor and a second scale factor via matrix accelerator circuitry, the first scale factor based on a statistical measure of a subregion of numerical values within a region of the plurality of regions and the second scale factor based on a statistical measure of the region that includes the subregion, and generate a quantized representation of the tensor via quantization of the hierarchically scaled numerical values.
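
A worked numeric sketch of two-level scaling under an assumed map (one region per row, two subregions per region); the scale encodings are illustrative, not Intel's format:

    import numpy as np

    tensor = np.random.randn(4, 16).astype(np.float32)
    subregions = tensor.reshape(4, 2, 8)        # per region (row): 2 subregions of 8 values

    sub_scale = np.abs(subregions).max(axis=2, keepdims=True) + 1e-12   # first scale factor
    region_scale = sub_scale.max(axis=1, keepdims=True)                 # second scale factor

    q = np.round(subregions / sub_scale * 127).astype(np.int8)          # 8-bit values
    sub_q = np.round(sub_scale / region_scale * 255).astype(np.uint8)   # 8-bit subregion scales

    # dequantize: value ~= q/127 * (sub_q/255) * region_scale
    recon = q / 127.0 * (sub_q / 255.0) * region_scale
    print(np.abs(recon - subregions).max())     # small reconstruction error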

CONTROLLING AN AGENT USING PRE-COMMITTED SEQUENCES OF ACTIONS

Publication No.:  WO2025190472A1 18/09/2025
Applicant: 
DEEPMIND TECH LTD [GB]
DEEPMIND TECHNOLOGIES LIMITED
WO_2025190472_PA

Abstract of: WO2025190472A1

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for controlling an agent interacting with an environment by selecting actions to be performed by the agent using an action selection neural network. In one aspect, a method comprises, at each of a plurality of action selection iterations: receiving data identifying a current observation and a current pre-committed sequence of actions; processing a network input comprising: (i) the current observation, and (ii) the current pre-committed sequence of actions, using the action selection neural network, to generate an action selection output; selecting a next sequence of actions based on the action selection output, wherein the next sequence of actions comprises a predefined number of actions that define a next pre-committed sequence of actions; and causing the agent to perform the next pre-committed sequence of actions after the agent has performed the current pre-committed sequence of actions.
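
A control-loop sketch under assumed interfaces; policy stands in for the action selection neural network and returns a fixed-length pre-committed sequence:

    import random

    def policy(observation, committed):
        # network input: current observation + current pre-committed sequence;
        # a trained network would score actions here instead of sampling
        return [random.choice(["left", "right", "stay"]) for _ in range(3)]

    committed = ["stay", "stay", "stay"]     # initial pre-committed sequence
    observation = 0.0
    for step in range(4):
        next_committed = policy(observation, committed)   # select while committed runs
        for action in committed:                          # agent performs committed actions
            observation += {"left": -1, "right": 1, "stay": 0}[action]
        committed = next_committed                        # then switch to the next sequence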

SYSTEM AND METHOD FOR ACCURATE RESPONSES FROM CHATBOTS AND LLMS

Publication No.:  WO2025193562A1 18/09/2025
Applicant: 
ACURAI INC [US]
ACURAI, INC
WO_2025193562_PA

Abstract of: WO2025193562A1

Systems and methods are described for obtaining accurate responses from large language models (LLMs) and chatbots, including for question answering, exposition, and summarization. These systems and methods accomplish these objectives via use of noun phrase avoiding processes such as a noun phrase collision detection process, a query splitting process, and a topical splitting process as well as by use of formatted facts, formatted fact model correction interfaces (FF MCIs), bounded-scope deterministic (BSD) neural networks, processes and methods, and intelligent storage and retrieval (ISAR) systems and methods. These systems and methods avoid and bypass noun phrase collisions and correct for errors caused by noun phrase collisions so that hallucinations are eliminated from LLM responses.

AN EVOLUTIONARY SCHEME FOR TUNING STOCHASTIC NEUROMORPHIC OPTIMIZERS

Publication No.:  WO2025191315A1 18/09/2025
Applicant: 
ERICSSON TELEFON AB L M [SE]
TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
WO_2025191315_PA

Abstract of: WO2025191315A1

A computer-implemented method (200) for automated tuning of a stochastic spiking neural network (SSNN) for solving a combinatorial optimization problem (COP). The method includes (i) defining (205) a set of features of the COP. The method further includes (ii) building (210) the SSNN with an architecture based on the defined COP feature set. The method further includes (iii) selecting (215) a tunable set of parameters of the SSNN based on the architecture. The method further includes (iv) tuning (220) the selected tunable set of parameters using a genetic algorithm - SSNN (GA-SSNN) model. The method further includes (v) implementing (225) the SSNN using the tuned set of parameters. The method further includes (vi) evaluating (230) the performance of the SSNN to determine whether a pre-determined criterion for solving the COP is met. The method further includes (vii) repeating (235) steps (iv) to (vi) until the performance meets the pre-determined criterion. The method further includes (viii) obtaining (240) the solution for solving the COP.
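
A skeleton of steps (iv) to (vii): a genetic loop over a tunable parameter set, with a toy fitness function standing in for building and evaluating the SSNN on the COP:

    import random

    def evaluate(params):                 # step (vi): toy stand-in for SSNN performance
        return -sum((p - 0.5) ** 2 for p in params)

    population = [[random.random() for _ in range(4)] for _ in range(20)]
    for generation in range(100):         # step (vii): repeat until the criterion is met
        population.sort(key=evaluate, reverse=True)
        if evaluate(population[0]) > -1e-3:
            break
        parents = population[:10]         # selection
        children = [[p + random.gauss(0, 0.05) for p in random.choice(parents)]
                    for _ in range(10)]   # mutation
        population = parents + children

    best = max(population, key=evaluate)  # step (viii): tuned parameter set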

SPACE EFFICIENT TRAINING FOR SEQUENCE TRANSDUCTION MACHINE LEARNING

Publication No.:  EP4618073A2 17/09/2025
Applicant: 
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
EP_4618073_PA

Abstract of: EP4618073A2

Efficient training is provided for models comprising RNN-T (recurrent neural network transducers). The model transducers comprise an encoder, a decoder, and a fused joint network. The fused joint network receives encoding and decoding embeddings from the encoder and decoder. During training, the model stores the probability data for the next blank output and the next token at each time step rather than storing all probabilities for all possible outputs. This can significantly reduce requirements for memory storage, while still preserving the relevant information required to calculate the loss that will be backpropagated through the neural transducer during training to update the parameters of the neural transducer and to generate a trained or modified neural transducer. The computation of embeddings can also be divided into small slices and some of the utterance padding used for the training samples can also be removed to further reduce the memory storage requirements.

HIERARCHY OF NEURAL NETWORK SCALING FACTORS

Publication No.:  EP4617952A1 17/09/2025
Applicant: 
INTEL CORP [US]
INTEL Corporation
EP_4617952_PA

Abstract of: EP4617952A1

Embodiments described herein provide techniques to facilitate hierarchical scaling when quantizing neural network data to a reduced-bit representation. The techniques include operations to load a hierarchical scaling map for a tensor associated with a neural network, partition the tensor into a plurality of regions that respectively include one or more subregions based on the hierarchical scaling map, hierarchically scale numerical values of the tensor based on a first scale factor and a second scale factor via matrix accelerator circuitry, the first scale factor based on a statistical measure of a subregion of numerical values within a region of the plurality of regions and the second scale factor based on a statistical measure of the region that includes the subregion, and generate a quantized representation of the tensor via quantization of the hierarchically scaled numerical values.
