Ministerio de Industria, Turismo y Comercio
 

Sare neuronalak (Neural networks)

Results: 16
Last update: 08/02/2026 [07:35:00]
Applications published in the last 30 days
Results 1 to 16

SYSTEM AND METHOD FOR PROCESSING ULTRASOUND IMAGES

Publication No.: US20260030500A1 29/01/2026
Applicant:
NEW YORK UNIV [US]
YEDA RES AND DEVELOPMENT CO LTD [IL]
New York University,
Yeda Research And Development Co. Ltd
US_20260030500_PA

Abstract of US20260030500A1

A system for processing ultrasound images utilizes a trained orientation neural network to provide orientation information for a multiplicity of images captured around a body part, orienting each image with respect to a canonical view. In one aspect, the system includes a set creator and a generative neural network. The set creator generates sets of images and their associated transformations over time. The generative neural network then produces a summary canonical view set from these sets, showing changes during a body part cycle. In another aspect, the system includes a volume reconstructer. The volume reconstructer uses the orientation information to generate a volume representation of the body part from the oriented images using tomographic reconstruction, and to generate a canonical image from that volume representation.
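As an illustrative sketch only (not the patent's implementation), orienting a capture back to the canonical view amounts to applying the inverse of the estimated orientation. A minimal 2-D rotation example, with the angle standing in for the orientation network's output:

```python
import math

def orient_to_canonical(points, angle_estimate):
    """Rotate an image's sample points by the inverse of the estimated
    orientation (a stand-in for the orientation network's output), so
    every capture lines up with the canonical view."""
    c, s = math.cos(-angle_estimate), math.sin(-angle_estimate)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# A point from a capture that was rotated 90 degrees from canonical.
pts = [(0.0, 1.0)]
canon = orient_to_canonical(pts, math.pi / 2)
assert abs(canon[0][0] - 1.0) < 1e-9 and abs(canon[0][1]) < 1e-9
```

In the patent, the oriented images would then feed either the set creator / generative network or the tomographic volume reconstructer.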

IMAGE LOCALIZABILITY CLASSIFIER

Publication No.: US20260030910A1 29/01/2026
Applicant:
ADOBE INC [US]
Adobe Inc
US_20260030910_PA

Abstract of US20260030910A1

In a computer-implemented workflow, a submission of an asset localized for a first location is received. The asset may be intended for dissemination to a second location. A trained neural network is applied to the asset to determine a probability of recommending localization of the asset for the second location. This determination can be based on a plurality of features indicating contextual aspects of a document, which are identified in accordance with a plurality of transformations performed on the asset utilizing the trained neural network. Responsive to determining that the probability satisfies a condition, such as being a percentage above a threshold value, a recommendation is provided to exclude the asset from being localized to the second location.
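A minimal sketch of the thresholded recommendation step, with a hypothetical `recommend_exclusion` helper and an assumed threshold value (the abstract does not fix one):

```python
def recommend_exclusion(probability: float, threshold: float = 0.8) -> bool:
    """Return True when the classifier's probability satisfies the
    condition (here: exceeding a threshold), i.e. recommend excluding
    the asset from localization to the second location."""
    return probability > threshold

# Hypothetical probabilities from a trained classifier, not real outputs.
assert recommend_exclusion(0.93) is True
assert recommend_exclusion(0.42) is False
```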

SYSTEMS AND METHODS FOR SHAPE OPTIMIZATION OF STRUCTURES USING PHYSICS INFORMED NEURAL NETWORKS

Publication No.: EP4684316A1 28/01/2026
Applicant:
MITSUBISHI ELECTRIC MOBILITY CORP [JP]
Mitsubishi Electric Mobility Corporation
US_2025259062_PA

Abstract of US2025259062A1

A method is provided for training a shape optimization neural network to produce an optimized point cloud defining desired shapes of materials with given properties. The method comprises collecting a subject point cloud including points identified by their initial coordinates and material properties, and jointly training (a) a first neural network to iteratively modify a shape boundary by changing the coordinates of a set of points in the subject point cloud so as to maximize an objective function, and (b) a second neural network to solve for physical fields by satisfying the partial differential equations imposed by the physics of the different materials of the subject point cloud, given the shape produced by the changed coordinates output by the first neural network. The method also comprises outputting the optimized coordinates of the set of points in the subject point cloud, as produced by the trained first neural network.
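A toy sketch of the joint training loop, with trivial stand-ins for both networks and a hypothetical objective and PDE residual (the real method trains two neural networks; this only mirrors the alternation between shape updates and re-solving the physics):

```python
def objective(points):
    # Hypothetical objective to maximize: sum of squared coordinates.
    return sum(x * x for x in points)

def pde_residual(points, field):
    # Hypothetical physics check: the field should equal x^2 at each point.
    return sum((field(x) - x * x) ** 2 for x in points)

def joint_training_step(points, lr=0.1):
    # Role of the first network: nudge coordinates uphill on the objective
    # (gradient of x^2 is 2x).
    moved = [x + lr * (2 * x) for x in points]
    # Role of the second network: re-solve the physical field on the new
    # shape (solved exactly here; a PINN would take a training step).
    new_field = lambda x: x * x
    return moved, new_field

points = [0.5, -1.0, 2.0]
field = lambda x: 0.0
for _ in range(10):
    points, field = joint_training_step(points)

assert objective(points) > objective([0.5, -1.0, 2.0])
assert pde_residual(points, field) < 1e-9
```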

Image Analysis System for Testing in Manufacturing

Publication No.: US20260024188A1 22/01/2026
Applicant:
BRIGHT MACHINES INC [US]
Bright Machines, Inc
US_20260024188_PA

Abstract of US20260024188A1

A vision analytics and validation (VAV) system for providing improved inspection of robotic assembly. The VAV system comprises a trained neural-network three-way classifier, which classifies each component as good, bad, or do-not-know, and an operator station configured to enable an operator to review the output of the trained neural network and to determine whether a board including one or more "bad" or "do not know" classified components passes review and is classified as good, or fails review and is classified as bad. In one embodiment, a retraining trigger utilizes the output of the operator station to retrain the neural network, based on the determination received from the operator station.
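The pass/fail review logic can be sketched as follows; the function name and verdict encoding are assumptions, not the patent's API:

```python
def review_board(classifications, operator_verdict=None):
    """Board passes automatically when every component is 'good';
    otherwise the operator's verdict on the flagged components decides."""
    flagged = [c for c in classifications if c in ("bad", "do not know")]
    if not flagged:
        return "good"
    return "good" if operator_verdict == "pass" else "bad"

assert review_board(["good", "good"]) == "good"
assert review_board(["good", "do not know"], operator_verdict="pass") == "good"
assert review_board(["good", "bad"], operator_verdict="fail") == "bad"
```

The operator's determinations would, per the retraining trigger, become new labeled examples for the classifier.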

AUGMENTING TEMPORAL ANTI-ALIASING WITH A NEURAL NETWORK FOR HISTORY VALIDATION

Publication No.: US20260024249A1 22/01/2026
Applicant:
INTEL CORP [US]
Intel Corporation
US_20260024249_PA

Abstract of US20260024249A1

An apparatus to facilitate augmenting temporal anti-aliasing with a neural network for history validation is disclosed. The apparatus includes a set of processing resources configured to perform augmented temporal anti-aliasing (TAA), the set of processing resources including circuitry configured to: receive, at a history validation neural network, inputs for a current pixel of a current frame and a reprojected pixel corresponding to the current pixel, the reprojected pixel originating from history data of the current frame; generate, using an output of the history validation neural network, a validated color for the current pixel based on current color data corresponding to the current pixel and history color data corresponding to the reprojected pixel; render an output frame using the validated color; and add the output frame to the history data.
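A minimal sketch of the color-validation step, assuming the network's output is reduced to a single per-pixel validity weight (the abstract does not fix this form):

```python
def validated_color(current, history, validity):
    """Blend current and reprojected-history RGB colors per channel,
    weighting history by a validity factor in [0, 1] (a stand-in for
    the history validation network's output)."""
    assert 0.0 <= validity <= 1.0
    return tuple(validity * h + (1.0 - validity) * c
                 for c, h in zip(current, history))

cur, hist = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
assert validated_color(cur, hist, 0.0) == cur    # history fully rejected
assert validated_color(cur, hist, 1.0) == hist   # history fully trusted
assert validated_color(cur, hist, 0.5) == (0.5, 0.5, 0.0)
```

The rendered output frame would then be appended to the history buffer for the next frame's reprojection.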

MULTI-STREAM RECURRENT NEURAL NETWORK TRANSDUCER(S)

Publication No.: US20260023950A1 22/01/2026
Applicant:
GOOGLE LLC [US]
GOOGLE LLC
US_20260023950_PA

Abstract of US20260023950A1

Techniques are disclosed that enable generating jointly probable output by processing input using a multi-stream recurrent neural network transducer (MS RNN-T) model. Various implementations include generating a first output sequence and a second output sequence by processing a single input sequence using the MS RNN-T, where the first output sequence is jointly probable with the second output sequence. Additional or alternative techniques are disclosed that enable generating output by processing multiple input sequences using the MS RNN-T. Various implementations include processing a first input sequence and a second input sequence using the MS RNN-T to generate output. In some implementations, the MS RNN-T can be used to process two or more input sequences to generate two or more jointly probable output sequences.
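A toy illustration of why decoding two streams jointly differs from decoding each stream independently; the distribution values are invented, and real RNN-T decoding operates over whole token sequences:

```python
def joint_decode(joint_probs):
    """Pick the pair of outputs with the highest *joint* probability."""
    return max(joint_probs, key=joint_probs.get)

def marginal_argmax(joint_probs, stream):
    """Best output for one stream if decoded from its marginal alone."""
    m = {}
    for pair, prob in joint_probs.items():
        m[pair[stream]] = m.get(pair[stream], 0.0) + prob
    return max(m, key=m.get)

# Toy joint distribution over (stream-1 output, stream-2 output) pairs.
p = {("a", "x"): 0.40, ("a", "y"): 0.25, ("b", "x"): 0.00, ("b", "y"): 0.35}

assert joint_decode(p) == ("a", "x")
# Independent decoding picks ("a", "y"), a pair with lower joint probability:
assert (marginal_argmax(p, 0), marginal_argmax(p, 1)) == ("a", "y")
```

Modeling the streams jointly, as the MS RNN-T does, captures exactly this kind of cross-stream correlation.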

METHOD DEVICE AND PROGRAM FOR LEARNING ARTIFICIAL NEURAL NETWORKS BASED ON SPEECH IMAGINATION BIOSIGNALS AND PHONEME INFORMATION

Publication No.: KR20260009627A 20/01/2026
Applicant:
고려대학교산학협력단 (Korea University Industry-Academic Cooperation Foundation)
US_20260018162_PA

Abstract of US20260018162A1

A method for learning an artificial neural network based on speech imagination biosignals and phoneme information according to one embodiment of the present disclosure may comprise the steps of: collecting speech imagination biosignals; labeling the collected speech imagination biosignals with phoneme information; pre-processing the labeled speech imagination biosignals; extracting feature vectors of the pre-processed speech imagination biosignals; and learning the extracted feature vectors through an artificial neural network to generate a classification model, wherein the pre-processing includes windowing to cut the labeled speech imagination biosignals into phoneme units, and the learning includes labeling phoneme information for the feature vectors extracted in phoneme units.
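The phoneme-unit windowing described in the pre-processing step can be sketched as cutting the labeled signal at each label change; the sample values and labels below are invented:

```python
def window_by_phoneme(samples, labels):
    """Cut a labeled biosignal into contiguous windows, one per run of
    identical phoneme labels, keeping each window's label attached."""
    windows, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            windows.append((labels[start], samples[start:i]))
            start = i
    return windows

sig = [0.1, 0.2, 0.3, 0.4, 0.5]
labs = ["a", "a", "b", "b", "b"]
assert window_by_phoneme(sig, labs) == [("a", [0.1, 0.2]),
                                        ("b", [0.3, 0.4, 0.5])]
```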

MACHINE INTELLIGENCE ON WIRELESS EDGE NETWORKS

Publication No.: WO2026015743A1 15/01/2026
Applicant:
MASSACHUSETTS INSTITUTE OF TECH [US]
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
WO_2026015743_PA

Abstract of WO2026015743A1

Deep learning has revolutionized image classification, robotics, life sciences, and other fields. However, the exponential growth in deep neural network parameters and data volumes has strained traditional computing architectures, primarily due to the data movement bottleneck. A Machine Intelligence on Wireless Edge Networks (MIWEN) approach for deep learning on ultra-low-power edge devices addresses this data movement bottleneck. MIWEN leverages disaggregated memory access to wirelessly stream machine learning (ML) models to edge devices, mitigating memory and power bottlenecks by integrating computation into the existing radio-frequency (RF) analog chain of the wireless transceivers on edge devices. MIWEN aims to achieve scalable and efficient implementations, significantly reducing energy consumption and latency compared to conventional digital signal processing systems.

METHOD FOR DETECTING FRAUD IN FINANCIAL TRANSACTIONS

Publication No.: US20260017660A1 15/01/2026
Applicant:
RAPTORXAI PRIVATE LTD [IN]
RAPTORXAI PRIVATE LIMITED
US_20260017660_PA

Abstract of US20260017660A1

The invention provides a method for detecting fraud in financial transactions using a graph link attention network. The method involves constructing a transaction network where nodes represent transaction accounts and links represent transaction behaviors. Node features are extracted through a linear neural network, resulting in transformed node features. Link features are extracted and processed using a multi-head attention mechanism to generate link importance scores, with each score indicating the impact of the link on its corresponding node, and the total importance scores for each node summing to one. These transformed node features and link importance scores are combined to form mixed features, which are then utilized to identify fraudulent transactions within the transaction network. This approach enhances the accuracy and efficiency of fraud detection by focusing on the critical links in the transaction network.
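A sketch of the per-node normalization implied by the importance scores for each node summing to one, here a plain softmax over a node's incident-link scores (one common choice; the patent's multi-head attention mechanism is more involved):

```python
import math

def link_importance(scores_per_node):
    """Normalize each node's raw link-attention scores with a softmax,
    so the importance scores of that node's links sum to one."""
    out = {}
    for node, scores in scores_per_node.items():
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        out[node] = [e / total for e in exps]
    return out

# Invented raw scores for three links incident to one account node.
imp = link_importance({"acct1": [2.0, 1.0, 0.1]})
assert abs(sum(imp["acct1"]) - 1.0) < 1e-12
assert imp["acct1"][0] > imp["acct1"][1] > imp["acct1"][2]
```

The normalized scores would then weight link features when forming each node's mixed features.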

AUTOMATED FEATURE SELECTION FOR SPLIT NEURAL NETWORKS

Publication No.: US20260017517A1 15/01/2026
Applicant:
TELEFONAKTIEBOLAGET LM ERICSSON PUBL [SE]
Telefonaktiebolaget LM Ericsson (publ)
US_20260017517_PA

Abstract of US20260017517A1

A computer-implemented method and apparatus are provided for feature selection using a distributed machine learning (ML) model in a network comprising a plurality of local computing devices and a central computing device. The method includes training, at each local computing device, the ML model during one or more initial training rounds using a group of input features representing an input feature layer of the ML model. The method further includes generating, at each local computing device, based on the one or more initial training rounds, feature group values. The method further includes transmitting, from each local computing device, to the central computing device, the generated feature group values. The method further includes receiving, at each local computing device, from the central computing device, central computing device gradients. The method further includes computing, at each local computing device, local computing device gradients, using the received central computing device gradients. The method further includes generating, at each local computing device, a gradient trajectory for each input feature in the group of input features based on the computed local computing device gradients. The method further includes identifying, at each local computing device, based on the generated gradient trajectory, whether each input feature in the group of input features is non-contributing. The method further includes removing, at each local computing device, from the group of input features, each input feature identified as non-contributing.
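One plausible reading of the non-contributing test is a near-zero check on each feature's gradient trajectory; the criterion and threshold below are hypothetical, as the abstract does not fix them:

```python
def is_non_contributing(gradient_trajectory, eps=1e-3):
    """Flag a feature whose gradient magnitudes stay near zero across
    all training rounds (one plausible criterion, not the patent's)."""
    return all(abs(g) < eps for g in gradient_trajectory)

# Invented per-round gradient magnitudes for two input features.
trajectories = {"f1": [0.9, 0.5, 0.4], "f2": [1e-4, 2e-4, 5e-5]}
kept = [f for f, t in trajectories.items() if not is_non_contributing(t)]
assert kept == ["f1"]
```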

COMPRESSION AND STORAGE OF NEURAL NETWORK ACTIVATIONS FOR BACKPROPAGATION

Publication No.: US20260017520A1 15/01/2026
Applicant:
MICROSOFT TECH LICENSING LLC [US]
Microsoft Technology Licensing, LLC
US_20260017520_PA

Abstract of US20260017520A1

Apparatus and methods for training a neural network accelerator using quantized precision data formats are disclosed, and in particular for storing activation values from a neural network in a compressed format for use during forward and backward propagation training of the neural network. In certain examples of the disclosed technology, a computing system includes processors, memory, and a compressor in communication with the memory. The computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a narrower numerical precision than the first block floating-point format. The compressed activation values are stored in the memory, where they can be retrieved for use during back propagation.
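A shared-exponent ("block floating-point") quantization of an activation block can be sketched as below; the helper names and the 4-bit mantissa width are illustrative, not the patent's formats:

```python
import math

def to_block_fp(values, mantissa_bits):
    """Quantize a block of values to a shared exponent plus narrow
    integer mantissas (the essence of a block floating-point format)."""
    max_abs = max(abs(v) for v in values)
    shared_exp = math.frexp(max_abs)[1] if max_abs else 0
    scale = 2.0 ** (shared_exp - mantissa_bits)
    return shared_exp, [round(v / scale) for v in values]

def from_block_fp(shared_exp, mantissas, mantissa_bits):
    """Reconstruct approximate values from the compressed block."""
    scale = 2.0 ** (shared_exp - mantissa_bits)
    return [m * scale for m in mantissas]

vals = [0.75, -0.5, 0.1]                      # invented activation values
exp, man = to_block_fp(vals, mantissa_bits=4)
recon = from_block_fp(exp, man, mantissa_bits=4)
step = 2.0 ** (exp - 4)
# Each reconstructed value is within half a quantization step of the input.
assert all(abs(a - b) <= step / 2 for a, b in zip(vals, recon))
```

Narrowing `mantissa_bits` for the stored copy trades reconstruction error for memory, which is the compression described in the abstract.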

DIALOGUE MODEL TRAINING BASED ON REFERENCE-FREE DISCRIMINATORS

Publication No.: US20260017516A1 15/01/2026
Applicant:
TENCENT AMERICA LLC [US]
TENCENT AMERICA LLC
US_20260017516_PA

Abstract of US20260017516A1

A method of generating a neural network based open-domain dialogue model includes receiving an input utterance from a device having a conversation with the dialogue model, obtaining a plurality of candidate replies to the input utterance from the dialogue model, determining a plurality of discriminator scores for the candidate replies based on reference-free discriminators, determining a plurality of quality scores associated with the candidate replies, and training the dialogue model based on the quality scores.
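A sketch of scoring candidate replies with reference-free discriminators; the toy discriminators and the mean aggregation are assumptions, since the abstract does not fix how discriminator scores become quality scores:

```python
def quality_scores(candidates, discriminators):
    """Score each candidate reply as the mean of several reference-free
    discriminator scores (one simple aggregation choice)."""
    return {c: sum(d(c) for d in discriminators) / len(discriminators)
            for c in candidates}

# Toy discriminators standing in for trained reference-free models.
length_ok = lambda r: 1.0 if 2 <= len(r.split()) <= 20 else 0.0
not_generic = lambda r: 0.0 if r.lower() == "i don't know" else 1.0

scores = quality_scores(["i don't know", "the train leaves at noon"],
                        [length_ok, not_generic])
assert scores["the train leaves at noon"] > scores["i don't know"]
```

The quality scores would then drive the training signal for the dialogue model.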

CONVERTING PARAMETER TIME-SERIES TO LANGUAGE ELEMENTS IN ORDER TO FEED NEURAL NETWORKS THAT ANALYZE THE OPERATION OF INDUSTRIAL MACHINES

Publication No.: WO2026012989A1 15/01/2026
Applicant:
PAUL WURTH S A [LU]
PAUL WURTH S.A
WO_2026012989_PA

Abstract of WO2026012989A1

An industrial machine (100) performs an industrial process, and a state identifier (S2) corresponding to the technical state of the machine is identified. A pre-trained converter module (600) receives a single-variate parameter time-series ({Pn}) and converts it to a language series ({Ln}), that is, a series of language elements. A pre-trained generative transformer module (700) generates an extension (E) to the language series ({Ln}) based on a pre-trained probability of occurrence of language elements, and provides the state identifier (S2) as a statement that comprises at least the extension (E).
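A hypothetical converter from a parameter time-series to language elements, using equal-width value binning (the patent's converter module is pre-trained; this only illustrates the series-to-tokens idea):

```python
def series_to_tokens(series, alphabet="abcd"):
    """Map a single-variate parameter time-series to a string of
    language elements by equal-width binning of its value range."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / len(alphabet) or 1.0   # avoid /0 on flat series
    tokens = []
    for v in series:
        idx = min(int((v - lo) / width), len(alphabet) - 1)
        tokens.append(alphabet[idx])
    return "".join(tokens)

assert series_to_tokens([0.0, 1.0, 2.0, 3.0]) == "abcd"
assert series_to_tokens([0.0, 0.1, 2.9, 3.0]) == "aadd"
```

The resulting token series is what a generative transformer could then extend, token by token, toward a state statement.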

INCORPORATING STRUCTURED KNOWLEDGE IN NEURAL NETWORKS

Publication No.: EP4677480A1 14/01/2026
Applicant:
MICROSOFT TECHNOLOGY LICENSING LLC [US]
Microsoft Technology Licensing, LLC
WO_2024186551_PA

Abstract of WO2024186551A1

An approach to structured knowledge modeling and the incorporation of learned knowledge in neural networks is disclosed. Knowledge is encoded in a knowledge base (KB) in a manner that is explicit and structured, such that it is human-interpretable, verifiable, and editable. A neural network is able to read from and/or write to the knowledge model based on structured queries. The knowledge model has an interpretable property name-value structure, represented using property name embedding vectors and property value embedding vectors, such that an interpretable, structured query on the knowledge base may be formulated by a neural model in terms of tensor operations. The knowledge base admits gradient-based training or updates (of the knowledge base itself and/or a neural network(s) supported by the knowledge base), allowing knowledge or knowledge representations to be inferred from a training set using machine learning training methods.
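A minimal tensor-style read from a name-value knowledge base: match a query against property-name embeddings, softmax the matches, and return the weighted sum of value embeddings. The two-dimensional embeddings and the softmax read are assumptions for illustration:

```python
import math

def kb_read(query, names, values):
    """Soft read from a KB of (property-name, property-value) embedding
    pairs, expressed with the plain tensor ops the abstract describes:
    dot products, a softmax, and a weighted sum."""
    scores = [sum(q * n for q, n in zip(query, name)) for name in names]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

names = [[1.0, 0.0], [0.0, 1.0]]     # embeddings of two property names
values = [[10.0, 0.0], [0.0, 10.0]]  # their property-value embeddings
out = kb_read([5.0, 0.0], names, values)  # query close to the first name
assert out[0] > out[1]                    # first property's value dominates
```

Because every step is differentiable, gradients can flow into the query, the name embeddings, and the value embeddings, which is what makes the KB trainable.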

METHOD FOR PROVIDING INTELLIGENT RESPONSE AGENT BASED ON ADVANCED INFERENCE AND ESTIMATION FUNCTION, AND SYSTEM THEREFOR

Publication No.: EP4679287A1 14/01/2026
Applicant:
LG MAN DEVELOPMENT INSTITUTE CO LTD [KR]
LG Management Development Institute Co., Ltd
EP_4679287_A1

Abstract of EP4679287A1

A method and system for providing an intelligent response agent based on an advanced inference and estimation function, according to an embodiment of the present disclosure, can generate and provide response data for queries related to specialized documents using a deep-learning neural network that implements a stepwise advanced inference and estimation process.

MOVEMENT OF TENSOR DATA DURING RESHAPE OPERATION

Publication No.: EP4679271A2 14/01/2026

Applicant:

GOOGLE LLC [US]
GOOGLE LLC

EP_4679271_PA

Abstract of EP4679271A2

A method of performing a reshape operation specified in a reshape layer of a neural network model is described. The reshape operation reshapes an input tensor with an input tensor shape to an output tensor with an output tensor shape. The tensor data that has to be reshaped is directly routed between tile memories of the hardware accelerator in an efficient manner. This advantageously optimizes usage of memory space and allows any number and type of neural network models to be run on the hardware accelerator.
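A reshape reinterprets a row-major layout without reordering elements, which is why the tensor data can be routed directly between tile memories. A minimal sketch of that invariant (plain Python lists, not the accelerator's memory layout):

```python
def reshape(flat, shape):
    """Reshape a row-major flat list into nested lists; only the
    interpretation of the layout changes, never the element order."""
    if len(shape) == 1:
        assert len(flat) == shape[0]
        return list(flat)
    step = len(flat) // shape[0]
    return [reshape(flat[i * step:(i + 1) * step], shape[1:])
            for i in range(shape[0])]

t = reshape([1, 2, 3, 4, 5, 6], (2, 3))
assert t == [[1, 2, 3], [4, 5, 6]]
# Flattening and reshaping again preserves the underlying order.
flat_again = [x for row in t for x in row]
assert reshape(flat_again, (3, 2)) == [[1, 2], [3, 4], [5, 6]]
```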
