Abstract of: WO2026009140A1
The present invention relates to an AI-driven system for generating style transfer fingerprints and compositions. It includes modules for integrating with sonic libraries, extracting metadata and audio features, and employing deep neural networks for style transfer. A style fingerprint generation module captures the artist's sonic characteristics, which are stored securely in a database linked to artist profiles. A composition generation module uses these fingerprints to create new audio compositions that authentically reflect the artist's unique style. The method involves connecting the artist's library, preprocessing audio, extracting features, training a style transfer model, generating a style fingerprint, and producing compositions.
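A minimal Python sketch of the fingerprint stage of the pipeline described above. All function names and feature choices are invented for illustration; a real system would use learned audio embeddings rather than the toy statistics below.

```python
# Hypothetical sketch: per-track audio features aggregated into one
# "style fingerprint" vector for an artist's library.

def extract_features(audio):
    """Toy per-track features: mean absolute amplitude and zero-crossing rate."""
    mean_amp = sum(abs(x) for x in audio) / len(audio)
    zc = sum(1 for a, b in zip(audio, audio[1:]) if a * b < 0) / len(audio)
    return (mean_amp, zc)

def generate_fingerprint(library):
    """Aggregate per-track features into a single fingerprint vector."""
    feats = [extract_features(track) for track in library]
    n = len(feats)
    return tuple(sum(f[i] for f in feats) / n for i in range(2))

# two short toy "tracks" standing in for the artist's sonic library
library = [[0.1, -0.2, 0.3, -0.1], [0.5, 0.4, -0.5, 0.2]]
fp = generate_fingerprint(library)
```

A composition module would then condition generation on `fp`; that step is out of scope here.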
Abstract of: WO2026003488A1
The present invention relates to a computer-implemented method of training a pursuer neural network agent. The method comprises: providing an evader neural network agent trained to iteratively update a plurality of dynamic parameters of an evader, a pursuer neural network agent initialised to iteratively update a plurality of approximators for a pursuer, and an ordinary differential equation solver configured to update a plurality of dynamic parameters of the pursuer based on the updated approximators; sampling initial dynamic parameters from a set of initial dynamic parameters of the evader; running a simulation in which the pursuer neural network agent pursues the evader neural network agent based on the sampled initial dynamic parameters; computing a loss based on the dynamic parameters of the pursuer from the ordinary differential equation solver and the dynamic parameters of the evader from the evader neural network agent; and optimising weights of the pursuer neural network agent based on the computed loss.
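The training loop above can be sketched in one dimension. Everything below is an assumption for illustration: the evader drifts at a fixed speed, the pursuer's "approximator" is a single gain, the ODE solver is an Euler step, and the gradient is taken by finite differences instead of backpropagation.

```python
import random

def evader_step(x_e):
    """Stub evader agent: drifts right at a fixed speed."""
    return x_e + 0.1

def ode_step(x_p, gain, x_e):
    """Euler update of the pursuer's dynamics from its 'approximator' (a gain)."""
    return x_p + gain * (x_e - x_p) * 0.1

def rollout_loss(gain, x_e0, x_p0=0.0, steps=50):
    """Simulate a pursuit; the loss is the squared final pursuer-evader gap."""
    x_e, x_p = x_e0, x_p0
    for _ in range(steps):
        x_e = evader_step(x_e)
        x_p = ode_step(x_p, gain, x_e)
    return (x_e - x_p) ** 2

def train(gain=0.1, lr=0.01, iters=100, eps=1e-3):
    rng = random.Random(0)
    for _ in range(iters):
        x_e0 = rng.uniform(-1.0, 1.0)        # sample initial evader state
        g = (rollout_loss(gain + eps, x_e0) -
             rollout_loss(gain - eps, x_e0)) / (2 * eps)
        gain -= lr * g                        # "optimise weights" of the pursuer
    return gain

trained = train()
```

After training, the learned gain should track the evader far better than the initial one.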
Abstract of: EP4672031A1
Methods, systems, and apparatuses, including computer programs encoded on computer storage media, for receiving a query relating to a data item that includes multiple data item samples, and for processing the query and the data item to generate a response to the query. In particular, the described techniques adaptively select a subset of the data item samples using a selection neural network conditioned on features of the data item samples and the query, then process the subset and the query using a downstream task neural network to generate a response to the query. By adaptively selecting the subset of data item samples according to the query, the described techniques generate responses that are more accurate and require fewer computational resources than other techniques.
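A toy sketch of the two-stage selection idea. The "selection network" and "downstream task network" below are stubs (word-overlap scoring and best-match lookup); the names and scoring are assumptions, not from the publication.

```python
# Query-conditioned sample selection followed by a downstream task on the subset.

def selection_scores(samples, query):
    """Stub 'selection network': score each sample by word overlap with the query."""
    return [len(set(s.split()) & set(query.split())) for s in samples]

def select_subset(samples, query, k):
    """Keep only the k samples most relevant to this particular query."""
    scored = sorted(zip(selection_scores(samples, query), samples),
                    key=lambda t: -t[0])
    return [s for _, s in scored[:k]]

def downstream_task(subset, query):
    """Stub downstream network: answer with the best-matching retained sample."""
    return subset[0]

samples = ["cat on mat", "dog in park", "cat chases dog"]
subset = select_subset(samples, "where is the cat", k=2)
answer = downstream_task(subset, "where is the cat")
```

The downstream network only ever sees `k` samples, which is where the compute saving comes from.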
Abstract of: EP4672089A1
Examples of the present disclosure involve a generation method, a usage method, a device, an electronic device, and a computer program product for a graph neural network. The generation method comprises generating at least one categorical embedding based on target domain knowledge, determining node embeddings of the training graph using the at least one categorical embedding and the original node features of the training graph, and generating a graph neural network using the determined node embeddings. The generation method consistent with examples of the present disclosure is capable of infusing domain knowledge into a graph neural network so that the network understands that knowledge. In this way, the graph neural network can understand the true semantic and contextual information of the target domain, thus granting it domain generality.
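An illustrative sketch of infusing categorical domain knowledge into node embeddings before message passing. The category table, feature values, and mean-aggregation step are all invented for the example.

```python
# Domain knowledge as per-category embeddings, concatenated onto node features,
# followed by one round of mean-aggregation message passing.

CATEGORY_EMBED = {"pump": [1.0, 0.0], "valve": [0.0, 1.0]}  # invented table

def node_embeddings(features, categories):
    """Concatenate each node's original features with its category embedding."""
    return [f + CATEGORY_EMBED[c] for f, c in zip(features, categories)]

def message_pass(embeds, edges):
    """One mean-aggregation step over the graph's undirected edges."""
    n, d = len(embeds), len(embeds[0])
    out = [list(e) for e in embeds]
    for i in range(n):
        nbrs = [j for a, b in edges for i2, j in ((a, b), (b, a)) if i2 == i]
        if not nbrs:
            continue
        for j in nbrs:
            for k in range(d):
                out[i][k] += embeds[j][k] / len(nbrs)
    return out

feats = [[0.5], [0.2], [0.9]]
cats = ["pump", "valve", "pump"]
emb = node_embeddings(feats, cats)          # features + domain knowledge
h = message_pass(emb, edges=[(0, 1), (1, 2)])
```

A GNN trained on `h` sees the category information alongside the raw features.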
Abstract of: EP4672084A1
The present invention relates to a computer-implemented method of training a pursuer neural network agent. The method comprises: providing an evader neural network agent trained to iteratively update a plurality of dynamic parameters of an evader, a pursuer neural network agent initialised to iteratively update a plurality of approximators for a pursuer, and an ordinary differential equation solver configured to update a plurality of dynamic parameters of the pursuer based on the updated approximators; sampling initial dynamic parameters from a set of initial dynamic parameters of the evader; running a simulation in which the pursuer neural network agent pursues the evader neural network agent based on the sampled initial dynamic parameters; computing a loss based on the dynamic parameters of the pursuer from the ordinary differential equation solver and the dynamic parameters of the evader from the evader neural network agent; and optimising weights of the pursuer neural network agent based on the computed loss.
Abstract of: WO2025262917A1
This proof verification device includes: a forward propagation unit that guarantees, by zero-knowledge proof, the correctness of the computation in the forward propagation processing of a batch normalization system of a neural network or deep neural network; and a backward propagation unit that guarantees, by zero-knowledge proof, the correctness of the computation in the backward propagation processing of the neural network or deep neural network.
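The zero-knowledge machinery itself is out of scope here; as a stand-in, the toy verifier below simply recomputes the batch-normalization forward pass and checks a claimed output against it. This illustrates only the correctness property being guaranteed, not the cryptographic proof.

```python
import math

def batchnorm_forward(xs, eps=1e-5):
    """Plain batch-normalization forward pass over one batch (no scale/shift)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

def verify_forward(xs, claimed, tol=1e-6):
    """Accept the claimed outputs only if they match recomputation."""
    ref = batchnorm_forward(xs)
    return all(abs(a - b) < tol for a, b in zip(ref, claimed))

xs = [1.0, 2.0, 3.0]
ok = verify_forward(xs, batchnorm_forward(xs))   # honest prover
bad = verify_forward(xs, [0.0, 0.0, 0.0])        # dishonest claim
```

A zero-knowledge version would establish the same accept/reject outcome without revealing `xs` or the intermediate values.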
Abstract of: US2025390106A1
An embodiment relates to a robot executing a social-friendly navigation algorithm. The robot may include a communication unit, an input unit, a driving unit configured to move the robot, a memory, and at least one processor connected to the memory and configured to execute computer-readable instructions stored in the memory. By performing neural network computation using a separate processor and utilizing multiple processors in parallel, device efficiency may be improved.
Abstract of: US2025390717A1
An inference processing device includes: a division unit that divides a layer of a convolutional neural network into a plurality of sublayers in the channel direction; a convolution unit that executes convolution processing for each sublayer and outputs a convolution result; an addition unit that, each time convolution processing is executed, uses the adder provided for adding a bias to add the intermediate value accumulated over the previous sublayers to the current convolution result, and outputs the addition result; and an activation unit that inputs to an activation function the addition result obtained once the convolution result of the last sublayer has been added.
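The channel-wise accumulation scheme can be sketched for a single 1-D output position. The split into two sublayers, the weights, and the ReLU activation are illustrative; the point is that the sublayer-by-sublayer accumulation reproduces the full-layer result exactly.

```python
# Channel-split convolution: partial sums per sublayer, accumulated through
# the bias adder, activated only after the last sublayer.

def conv_sublayer(inputs, weights):
    """Dot product over one sublayer's channels (a partial convolution)."""
    return sum(x * w for x, w in zip(inputs, weights))

def relu(x):
    return max(0.0, x)

def layer_forward(channel_inputs, channel_weights, bias, n_sublayers):
    """Split channels into sublayers, accumulate partial sums, then activate."""
    step = len(channel_inputs) // n_sublayers
    acc = bias                              # bias folded into the accumulator
    for s in range(n_sublayers):
        part = conv_sublayer(channel_inputs[s*step:(s+1)*step],
                             channel_weights[s*step:(s+1)*step])
        acc += part                         # adder reuses the intermediate value
    return relu(acc)

inputs = [1.0, 2.0, 3.0, 4.0]
weights = [0.5, -1.0, 2.0, 0.25]
full = relu(sum(x * w for x, w in zip(inputs, weights)) + 0.1)
split = layer_forward(inputs, weights, bias=0.1, n_sublayers=2)
```

The hardware benefit is that each sublayer needs only a fraction of the multipliers, while the adder that already exists for the bias handles the accumulation.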
Abstract of: US2025390745A1
Systems and methods for selecting a neural network for a machine learning (ML) problem are disclosed. A method includes accessing an input matrix, and accessing an ML problem space associated with an ML problem and multiple untrained candidate neural networks for solving the ML problem. The method includes computing, for each untrained candidate neural network, at least one expressivity measure capturing an expressivity of the candidate neural network with respect to the ML problem. The method includes computing, for each untrained candidate neural network, at least one trainability measure capturing a trainability of the candidate neural network with respect to the ML problem. The method includes selecting, based on the at least one expressivity measure and the at least one trainability measure, at least one candidate neural network for solving the ML problem. The method includes providing an output representing the selected at least one candidate neural network.
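A hypothetical sketch of ranking untrained candidates by combined expressivity and trainability scores. The two proxy measures below are invented heuristics; the publication's actual measures are not specified in the abstract.

```python
# Select among untrained candidates using stub expressivity/trainability proxies.

def expressivity(candidate):
    """Toy proxy: deeper and wider networks are treated as more expressive."""
    return candidate["depth"] * candidate["width"]

def trainability(candidate):
    """Toy proxy: very deep networks are treated as harder to train."""
    return 1.0 / candidate["depth"]

def select(candidates):
    """Pick the candidate maximising the product of the two measures."""
    return max(candidates, key=lambda c: expressivity(c) * trainability(c))

candidates = [
    {"name": "A", "depth": 2, "width": 8},
    {"name": "B", "depth": 16, "width": 4},
    {"name": "C", "depth": 4, "width": 16},
]
best = select(candidates)
```

Candidate B is the most expressive here but scores poorly on trainability, so the balanced candidate wins.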
Abstract of: US2025390532A1
Methods, systems, and apparatuses, including computer programs encoded on computer storage media, for receiving a query relating to a data item that includes multiple data item samples, and for processing the query and the data item to generate a response to the query. In particular, the described techniques adaptively select a subset of the data item samples using a selection neural network conditioned on features of the data item samples and the query, then process the subset and the query using a downstream task neural network to generate a response to the query. By adaptively selecting the subset of data item samples according to the query, the described techniques generate responses that are more accurate and require fewer computational resources than other techniques.
Abstract of: US2025384350A1
Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
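A toy sketch of using an encoder for semantic textual similarity. The bag-of-words "encoder" and the fixed vocabulary below are stand-ins for the trained neural encoder; cosine similarity on the encodings plays the role of the similarity measure.

```python
import math

def encode(text, vocab):
    """Stub encoder: bag-of-words counts over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    """Cosine similarity between two encoding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vocab = ["cat", "dog", "sat", "ran"]
q = encode("the cat sat", vocab)
a = encode("a cat sat down", vocab)
b = encode("the dog ran", vocab)
sim_a, sim_b = cosine(q, a), cosine(q, b)
```

A responsive action would then be chosen from whichever stored string scores highest against the query encoding.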
Abstract of: WO2025260090A1
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a generative neural network. In particular, the generative neural network is trained on an objective function that includes multiple different objectives, with two or more of the objectives being reward objectives.
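A minimal sketch of an objective combining multiple reward terms. Both reward functions and the weights are invented for illustration; the generative model is reduced to choosing among a few candidate scalar "samples".

```python
# A combined objective with two reward terms, to be maximised.

def reward_quality(sample):
    """Toy reward: prefer samples close to a target value of 1.0."""
    return -abs(sample - 1.0)

def reward_brevity(sample):
    """Toy reward: mildly penalise larger samples."""
    return -0.1 * sample

def objective(sample, weights=(1.0, 1.0)):
    """Weighted sum of the reward objectives."""
    return weights[0] * reward_quality(sample) + weights[1] * reward_brevity(sample)

# stand-in for generation: pick the candidate scoring best under the objective
best = max([0.0, 0.5, 1.0, 2.0], key=objective)
```

In actual training the objective would be differentiated (or used as a policy-gradient reward) to update the generative network's weights rather than to rank fixed candidates.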
Abstract of: US2025384277A1
A device and a method for training a neural network based decoder. The method includes quantizing, during training and using a training quantizer, parameters representative of the coefficients of the neural network based decoder. A method and device are also provided for encoding at least parameters representative of the coefficients of a neural network based decoder. Also provided are a method for generating an encoded bitstream including an encoded neural network based decoder, a neural network based encoder and decoder, and a signal encoded using the neural network based encoder.
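Training-time quantization can be sketched with a one-parameter model. The uniform grid quantizer and the straight-through gradient below are assumptions for the example; the publication's quantizer may differ.

```python
# Quantization-aware training of a single weight: the forward pass uses the
# quantized weight, the gradient updates the full-precision weight.

def quantize(w, step=0.25):
    """Uniform training quantizer: snap a weight to the nearest grid point."""
    return round(w / step) * step

def train_step(w, x, y, lr=0.1):
    """One gradient step on (quantize(w)*x - y)^2, straight-through to w."""
    wq = quantize(w)                 # quantized weight in the forward pass
    grad = 2 * (wq * x - y) * x      # gradient passed through the quantizer
    return w - lr * grad

w = 0.6
for _ in range(30):
    w = train_step(w, x=1.0, y=1.0)
wq = quantize(w)
```

The trained weight lands on a grid point that fits the data, so the decoder's coefficients are already quantized when they are encoded into the bitstream.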
Abstract of: US2025384266A1
A computer-implemented method for automatically creating a digital twin of an industrial system having one or more devices includes accessing a triple store that includes an aggregated ontology of graph-based industrial data synchronized with the one or more devices. The triple store is queried for a specified device to extract, from the graph-based industrial data, structural information of the specified device defined by a tree comprising a hierarchy of nodes. For each node, a neural network element is assigned based on a mapping of node types to pre-defined neural network elements. The assigned neural network elements are combined based on the tree topology to create a digital twin neural network. The triple store is then queried to extract, from the graph-based industrial data, real-time process data gathered from the specified device at runtime, and the extracted real-time process data is used to tune parameters of the digital twin neural network.
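The node-type-to-element mapping can be sketched with plain functions in place of neural network elements. The element table, node types, and chain topology below are invented; a real device tree would be richer and the elements would be tunable network modules.

```python
# Compose a toy 'digital twin' from a device tree by mapping node types to
# pre-defined computational elements.

NODE_ELEMENTS = {                       # node type -> pre-defined element
    "sensor":   lambda x: x,            # pass-through
    "filter":   lambda x: 0.5 * x,      # attenuate
    "actuator": lambda x: x + 1.0,      # offset
}

def build_twin(tree):
    """Compose elements along the tree topology (a simple chain here)."""
    elements = [NODE_ELEMENTS[node["type"]] for node in tree]
    def twin(x):
        for f in elements:
            x = f(x)
        return x
    return twin

tree = [{"type": "sensor"}, {"type": "filter"}, {"type": "actuator"}]
twin = build_twin(tree)
out = twin(4.0)
```

In the described method, the runtime process data extracted from the triple store would then tune the parameters inside each element.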
Abstract of: WO2025257015A1
The invention relates to a computer-implemented method for operating a recommendation system which involves a) converting at least one part of at least one 3D CAD model, from which so-called "product manufacturing information" (PMI) data can be acquired, into a graph representation in such a way that the graph representation comprises the existing PMI data of the 3D CAD model, b) training a so-called "graph neural network" (GNN) model at least at a first, in particular initial, point in time, the GNN model being trained on the basis of at least the graph representation, and c) generating at least one recommendation output concerning at least the part of the 3D CAD model, in particular an output by the recommendation system on the basis of the graph representation and the GNN model. The invention further relates to an arrangement for carrying out the method, a computer-readable data carrier, and a computer program product.
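Step a) can be sketched as a conversion from PMI-annotated CAD features to a graph. The feature schema, adjacency list, and field names below are all invented; real PMI data (tolerances, datums, surface finishes) is far richer.

```python
# Convert a PMI-annotated CAD part into a node/edge graph representation.

def to_graph(cad_part):
    """Nodes = features with their PMI data; edges = declared adjacency."""
    nodes = {f["id"]: {"type": f["type"], "pmi": f.get("pmi", {})}
             for f in cad_part["features"]}
    edges = [(a, b) for a, b in cad_part["adjacent"]]
    return nodes, edges

part = {
    "features": [
        {"id": "hole1", "type": "hole", "pmi": {"tolerance": 0.01}},
        {"id": "face1", "type": "face"},
    ],
    "adjacent": [("hole1", "face1")],
}
nodes, edges = to_graph(part)
```

A GNN trained on such graphs (step b) can then emit recommendations for the part (step c), e.g. suggested PMI for un-annotated features.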
Abstract of: US2025384257A1
An apparatus to facilitate processing of a sparse matrix for arbitrary graph data is disclosed. The apparatus includes a graphics processing unit having a data management unit (DMU) that includes a scheduler for scheduling matrix operations, an active logic for tracking active input operands, and a skip logic for tracking unimportant input operands to be skipped by the scheduler. Processing circuitry is coupled to the DMU. The processing circuitry comprises a plurality of processing elements including logic to read operands and a multiplication unit to multiply two or more operands for the arbitrary graph data and customizable circuitry to provide custom functions.
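The skip logic's effect can be sketched in software as a sparse matrix–vector product that skips multiplications whenever either operand is zero. This is a functional illustration only; the apparatus implements the idea in scheduler hardware, not in a loop like this.

```python
# SpMV that skips 'unimportant' (zero) operands and counts the multiplies
# actually performed, mimicking the skip logic's work saving.

def spmv_with_skip(matrix, vector):
    """Sparse matrix-vector product; returns (result, number of multiplies)."""
    result, multiplies = [], 0
    for row in matrix:
        acc = 0.0
        for a, x in zip(row, vector):
            if a == 0.0 or x == 0.0:    # skip logic: operand contributes nothing
                continue
            acc += a * x
            multiplies += 1
        result.append(acc)
    return result, multiplies

m = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 3.0]]
v = [4.0, 5.0, 0.0]
res, work = spmv_with_skip(m, v)
```

A dense implementation would perform six multiplies here; the skip logic reduces that to two without changing the result.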
Publication no.: US2025384661A1 18/12/2025
Applicant: VERITONE INC [US] (Veritone, Inc.)
Abstract of: US2025384661A1
Methods and systems for training one or more neural networks for transcription and for transcribing a media file using the trained one or more neural networks are provided. One of the methods includes: segmenting the media file into a plurality of segments; inputting each segment, one segment at a time, of the plurality of segments into a first neural network trained to perform speech recognition; extracting outputs, one segment at a time, from one or more layers of the first neural network; and training a second neural network to generate a predicted-WER (word error rate) of a plurality of transcription engines for each segment based at least on outputs from the one or more layers of the first neural network.
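The two-network pipeline can be sketched end to end with stubs. Segmentation is real; the "layer outputs" of the first network and the WER predictor are stand-ins (a per-segment mean and a distance to a per-engine bias), and all names and numbers are invented.

```python
# Segment the media, extract stub per-segment features, and pick the
# transcription engine with the lowest predicted WER.

def segment(media, size):
    """Split the media (here a list of samples) into fixed-size segments."""
    return [media[i:i + size] for i in range(0, len(media), size)]

def first_network_features(seg):
    """Stub for features extracted from an ASR network's hidden layers."""
    return sum(seg) / len(seg)

def predict_wer(feature, engine_bias):
    """Stub second network: predicted WER of one engine for these features."""
    return abs(feature - engine_bias)

def best_engine(media, engines, size=2):
    """Route the media to the engine with the lowest predicted WER."""
    feats = [first_network_features(s) for s in segment(media, size)]
    avg = sum(feats) / len(feats)
    return min(engines, key=lambda e: predict_wer(avg, engines[e]))

engines = {"engine_a": 0.2, "engine_b": 0.8}
choice = best_engine([0.9, 0.7, 0.8, 0.6], engines)
```

In the described method the routing is per segment rather than per file, so different segments of one media file can go to different engines.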