Summary of: US20260030910A1
In a computer-implemented workflow, a submission of an asset localized for a first location is received. The asset may be intended for dissemination to a second location. A trained neural network is applied to the asset to determine a probability of recommending localization of the asset for the second location. This determination can be based on a plurality of features indicating contextual aspects of a document, which are identified in accordance with a plurality of transformations performed on the asset utilizing the trained neural network. Responsive to determining that the probability satisfies a condition, such as being a percentage above a threshold value, a recommendation is provided to exclude the asset from being localized to the second location.
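A minimal Python sketch of the thresholding step described above, mirroring the abstract's wording. The function names, feature dictionary, and threshold value are illustrative assumptions, not the patent's implementation.

```python
# Illustrative only: `score_localization` stands in for the trained neural
# network that maps contextual document features to a probability.
def score_localization(asset_features: dict) -> float:
    # Hypothetical placeholder; a real system would run the trained network
    # over features identified via transformations performed on the asset.
    return 0.87

def recommend(asset_features: dict, threshold: float = 0.8) -> str:
    probability = score_localization(asset_features)
    # Per the abstract, when the probability satisfies the condition (here,
    # exceeding the threshold), the recommendation is to exclude the asset
    # from being localized for the second location.
    return "exclude from localization" if probability > threshold else "localize"

print(recommend({"source_locale": "en-US", "target_locale": "ja-JP"}))
```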
Summary of: US20260030500A1
A system for processing ultrasound images utilizes a trained orientation neural network to provide orientation information for a multiplicity of images captured around a body part, orienting each image with respect to a canonical view. In one aspect, the system includes a set creator and a generative neural network. The set creator generates sets of images and their associated transformations over time. The generative neural network then produces a summary canonical view set from these sets, showing changes during a body part cycle. In another aspect, the system includes a volume reconstructer. The volume reconstructer uses the orientation information to generate a volume representation of the body part from the oriented images using tomographic reconstruction, and to generate a canonical image from that volume representation.
Summary of: US2025259062A1
A method for training a shape optimization neural network to produce an optimized point cloud defining desired shapes of materials with given properties is provided. The method comprises collecting a subject point cloud including points identified by their initial coordinates and material properties, and jointly training a first neural network to iteratively modify a shape boundary by changing coordinates of a set of points in the subject point cloud so as to maximize an objective function, and a second neural network to solve for physical fields by satisfying the partial differential equations imposed by the physics of the different materials of the subject point cloud having the shape produced by the changed coordinates output by the first neural network. The method also comprises outputting optimized coordinates of the set of points in the subject point cloud, produced by the trained first neural network.
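A compact PyTorch sketch of the joint training loop, under assumptions the abstract does not state: the objective function, the PDE residual, and both network architectures are placeholders chosen only to show the two networks being optimized together over a point cloud.

```python
import torch
import torch.nn as nn

points = torch.rand(1024, 3)   # initial coordinates of the subject point cloud
props = torch.rand(1024, 2)    # per-point material properties (illustrative)

shape_net = nn.Sequential(nn.Linear(5, 64), nn.Tanh(), nn.Linear(64, 3))  # predicts coordinate offsets
field_net = nn.Sequential(nn.Linear(5, 64), nn.Tanh(), nn.Linear(64, 1))  # predicts a physical field

opt = torch.optim.Adam(list(shape_net.parameters()) + list(field_net.parameters()), lr=1e-3)

for step in range(100):
    x = torch.cat([points, props], dim=1)
    new_coords = points + shape_net(x)                          # first network moves boundary points
    field = field_net(torch.cat([new_coords, props], dim=1))    # second network solves for fields

    objective = field.mean()            # placeholder objective to maximize
    pde_residual = (field ** 2).mean()  # placeholder PDE residual to minimize

    loss = -objective + pde_residual    # joint loss trains both networks together
    opt.zero_grad()
    loss.backward()
    opt.step()

optimized_coords = (points + shape_net(torch.cat([points, props], dim=1))).detach()
```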
Summary of: US20260023950A1
Techniques are disclosed that enable generating jointly probable output by processing input using a multi-stream recurrent neural network transducer (MS RNN-T) model. Various implementations include generating a first output sequence and a second output sequence by processing a single input sequence using the MS RNN-T, where the first output sequence is jointly probable with the second output sequence. Additional or alternative techniques are disclosed that enable generating output by processing multiple input sequences using the MS RNN-T. Various implementations include processing a first input sequence and a second input sequence using the MS RNN-T to generate output. In some implementations, the MS RNN-T can be used to process two or more input sequences to generate two or more jointly probable output sequences.
Summary of: US20260024249A1
An apparatus to facilitate augmenting temporal anti-aliasing with a neural network for history validation is disclosed. The apparatus includes a set of processing resources configured to perform augmented temporal anti-aliasing (TAA), the set of processing resources including circuitry configured to: receive, at a history validation neural network, inputs for a current pixel of a current frame and a reprojected pixel corresponding to the current pixel, the reprojected pixel originating from history data of the current frame; generate, using an output of the history validation neural network, a validated color for the current pixel based on current color data corresponding to the current pixel and history color data corresponding to the reprojected pixel; render an output frame using the validated color; and add the output frame to the history data.
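A small NumPy sketch of the validation/blending step, assuming the network's output is used as a per-pixel weight in [0, 1]; the placeholder heuristic inside `history_validation_net` and the blend formula are assumptions, not the patent's network or resolve pass.

```python
import numpy as np

def history_validation_net(current_rgb, history_rgb):
    # Hypothetical stand-in for the trained network: trust history less where
    # the current and reprojected colors disagree strongly.
    diff = np.linalg.norm(current_rgb - history_rgb, axis=-1, keepdims=True)
    return np.clip(1.0 - diff, 0.0, 1.0)

def resolve_pixel(current_rgb, history_rgb):
    validity = history_validation_net(current_rgb, history_rgb)
    # Blend current color data with reprojected history color data using the
    # network's output, producing the validated color that is rendered and
    # then appended back into the history data.
    return (1.0 - validity) * current_rgb + validity * history_rgb

current = np.random.rand(4, 4, 3).astype(np.float32)
history = np.random.rand(4, 4, 3).astype(np.float32)
validated = resolve_pixel(current, history)
```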
Summary of: US20260024188A1
A vision analytics and validation (VAV) system for providing an improved inspection of robotic assembly, the VAV system comprising a trained neural network three-way classifier to classify each component as good, bad, or do not know, and an operator station configured to enable an operator to review an output of the trained neural network and to determine whether a board including one or more “bad” or “do not know” classified components passes review and is classified as good, or fails review and is classified as bad. In one embodiment, a retraining trigger is configured to utilize the output of the operator station to retrain the trained neural network, based on the determination received from the operator station.
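A minimal sketch of one way a three-way classification could be realized, assuming a confidence-thresholded classifier; the thresholds and the per-board review rule are illustrative assumptions, not the patent's classifier.

```python
def classify_component(p_good: float, hi: float = 0.9, lo: float = 0.1) -> str:
    # Three-way decision from the network's confidence: clear pass, clear fail,
    # or "do not know", which defers the component to the operator station.
    if p_good >= hi:
        return "good"
    if p_good <= lo:
        return "bad"
    return "do not know"

board_scores = [0.97, 0.55, 0.02]
labels = [classify_component(p) for p in board_scores]
# Any "bad" or "do not know" component sends the whole board to operator review.
needs_operator_review = any(label in ("bad", "do not know") for label in labels)
```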
Summary of: US20260017520A1
Apparatus and methods for training a neural network accelerator using quantized precision data formats are disclosed, and in particular for storing activation values from a neural network in a compressed format for use during forward and backward propagation training of the neural network. In certain examples of the disclosed technology, a computing system includes processors, memory, and a compressor in communication with the memory. The computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a narrower numerical precision than the first block floating-point format. The compressed activation values are stored in the memory, where they can be retrieved for use during back propagation.
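A NumPy sketch of block floating-point conversion (one shared exponent per block, narrow integer mantissas), assuming a single block and a simple shared-exponent rule; the mantissa widths and rounding are illustrative, not the patent's formats.

```python
import numpy as np

def to_block_fp(block: np.ndarray, mantissa_bits: int):
    # One shared exponent for the whole block, sized to cover its max magnitude.
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-12)))
    scale = 2.0 ** (shared_exp - mantissa_bits)
    q = np.clip(np.round(block / scale),
                -(2 ** mantissa_bits), 2 ** mantissa_bits - 1).astype(np.int32)
    return q, shared_exp

def from_block_fp(q: np.ndarray, shared_exp: int, mantissa_bits: int):
    return q.astype(np.float32) * 2.0 ** (shared_exp - mantissa_bits)

acts = np.random.randn(8, 16).astype(np.float32)
wide_q, e1 = to_block_fp(acts, mantissa_bits=7)    # "first" (wider) format from forward propagation
narrow_q, e2 = to_block_fp(acts, mantissa_bits=3)  # narrower compressed format stored for backprop
recovered = from_block_fp(narrow_q, e2, mantissa_bits=3)
```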
Summary of: WO2026015743A1
Deep learning has revolutionized image classification, robotics, life sciences, and other fields. However, the exponential growth in deep neural network parameters and data volumes has strained traditional computing architectures, primarily due to the data movement bottleneck. A Machine Intelligence on Wireless Edge Networks (MIWEN) approach for deep learning on ultra-low-power edge devices addresses this data movement bottleneck. MIWEN leverages disaggregated memory access to wirelessly stream machine learning (ML) models to edge devices, mitigating memory and power bottlenecks by integrating computation into the existing radio-frequency (RF) analog chain of the wireless transceivers on edge devices. MIWEN aims to achieve scalable and efficient implementations, significantly reducing energy consumption and latency compared to conventional digital signal processing systems.
Summary of: US20260017517A1
A computer-implemented method and apparatus for feature selection using a distributed machine learning (ML) model in a network comprising a plurality of local computing devices and a central computing device is provided. The method includes training, at each local computing device, the ML model during one or more initial training rounds using a group of input features representing an input feature layer of the ML model. The method further includes generating, at each local computing device, based on the one or more initial training rounds, feature group values. The method further includes transmitting, from each local computing device, to the central computing device, the generated feature group values. The method further includes receiving, at each local computing device, from the central computing device, central computing device gradients. The method further includes computing, at each local computing device, local computing device gradients, using the received central computing device gradients. The method further includes generating, at each local computing device, a gradient trajectory for each input feature in the group of input features based on the computed local computing device gradients. The method further includes identifying, at each local computing device, based on the generated gradient trajectory, whether each input feature in the group of input features is non-contributing. The method further includes removing, at each local computing device, each input feature identified as non-contributing from the group of input features.
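A short NumPy sketch of the local pruning decision, assuming a gradient trajectory is the per-feature gradient magnitude accumulated over rounds and that a flat, near-zero trajectory marks a feature as non-contributing; the tolerance and aggregation are illustrative assumptions.

```python
import numpy as np

def non_contributing_features(gradient_history: np.ndarray, tol: float = 1e-4):
    # gradient_history: (training_rounds, n_input_features) gradients with
    # respect to each input feature, accumulated at one local device.
    trajectory = np.abs(gradient_history).mean(axis=0)
    # Features whose trajectory stays flat near zero are flagged for removal
    # from the input feature group.
    return trajectory < tol

rounds = [np.abs(np.random.randn(8)) * np.array([1, 1, 0, 1, 0, 1, 1, 1]) for _ in range(5)]
history = np.vstack(rounds)                  # features 2 and 4 never receive gradient
mask = non_contributing_features(history)
remaining = [i for i in range(history.shape[1]) if not mask[i]]
```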
Summary of: WO2026012989A1
An industrial machine (100~4) performs an industrial process, and a state identifier (S2~4) that corresponds to the technical state of the machine is identified. A pre-trained converter module (600~4) receives a single-variate parameter time-series ({Pn}~4) and converts it to a language series ({Ln}~4), that is, a series with language elements. A pre-trained generative transformer module (700~4) generates an extension (E~4) to the language series ({Ln}~4) based on a pre-trained probability of occurrence of language elements and provides the state identifier (S2~4) as a statement that comprises at least the extension (E~4).
Summary of: US20260017516A1
A method of generating a neural network based open-domain dialogue model includes receiving an input utterance from a device having a conversation with the dialogue model, obtaining a plurality of candidate replies to the input utterance from the dialogue model, determining a plurality of discriminator scores for the candidate replies based on reference-free discriminators, determining a plurality of quality scores associated with the candidate replies, and training the dialogue model based on the quality scores.
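A toy Python sketch of scoring candidate replies with reference-free discriminators and aggregating them into a quality score; the two discriminators and the averaging rule are invented stand-ins, not the patent's discriminators.

```python
def fluency(reply: str) -> float:
    return min(len(reply.split()) / 10.0, 1.0)      # toy proxy: very short replies score low

def non_repetition(reply: str) -> float:
    words = reply.lower().split()
    return len(set(words)) / max(len(words), 1)     # penalize repeated words

def quality_score(reply: str) -> float:
    scores = [fluency(reply), non_repetition(reply)]
    return sum(scores) / len(scores)                # aggregate the discriminator scores

candidates = ["I like that idea.", "Yes yes yes yes.", "Could you tell me more about it?"]
ranked = sorted(candidates, key=quality_score, reverse=True)
# The higher-quality candidates would then serve as training signal for the dialogue model.
```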
Summary of: EP4679271A2
A method of performing a reshape operation specified in a reshape layer of a neural network model is described. The reshape operation reshapes an input tensor with an input tensor shape to an output tensor with an output tensor shape. The tensor data that has to be reshaped is directly routed between tile memories of the hardware accelerator in an efficient manner. This advantageously optimizes usage of memory space and allows any number and type of neural network models to be run on the hardware accelerator.
Summary of: EP4679287A1
A method and system for providing an intelligent response agent based on a sophisticated reasoning and speculation function, according to an embodiment of the present disclosure, can generate and provide response data for queries related to specialized documents using a deep-learning neural network that implements a stepwise process for the sophisticated reasoning and speculation function.
Summary of: WO2024186551A1
An approach to structured knowledge modeling and the incorporation of learned knowledge in neural networks is disclosed. Knowledge is encoded in a knowledge base (KB) in a manner that is explicit and structured, such that it is human-interpretable, verifiable, and editable. Another neural network is able to read from and/or write to the knowledge model based on structured queries. The knowledge model has an interpretable property name-value structure, represented using property name embedding vectors and property value embedding vectors, such that an interpretable, structured query on the knowledge base may be formulated by a neural model in terms of tensor operations. The knowledge base admits gradient-based training or updates (of the knowledge base itself and/or a neural network(s) supported by the knowledge base), allowing knowledge or knowledge representations to be inferred from a training set using machine learning training methods.
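A minimal NumPy sketch of a structured query expressed as tensor operations over property name and value embeddings; the embedding shapes, dot-product similarity, and softmax addressing are assumptions used only to illustrate a differentiable read from such a knowledge base.

```python
import numpy as np

rng = np.random.default_rng(0)
property_names = rng.standard_normal((100, 64))    # one embedding per KB property name
property_values = rng.standard_normal((100, 64))   # corresponding property value embeddings

def kb_read(query_name_embedding: np.ndarray) -> np.ndarray:
    scores = property_names @ query_name_embedding  # similarity to each stored property name
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # soft, differentiable "address" over KB rows
    return weights @ property_values                 # value embedding returned to the querying model

answer_embedding = kb_read(rng.standard_normal(64))
```

Because the read is a chain of differentiable tensor operations, gradients can flow into both the query model and the stored embeddings, which is what makes gradient-based updates of the knowledge base possible.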
Summary of: WO2026009140A1
The present invention relates to an AI-driven system for generating style transfer fingerprints and compositions. It includes modules for integrating with sonic libraries, extracting metadata and audio features, and employing deep neural networks for style transfer. A style fingerprint generation module captures the artist's sonic characteristics, stored securely in a database linked to artist profiles. A composition generation module utilizes these fingerprints to create new audio compositions that authentically reflect the artist's unique style. The method involves connecting the artist's library, preprocessing audio, extracting features, training a style transfer model, generating a style fingerprint, and producing compositions.
Summary of: US20260010536A1
Briefly stated, the invention is directed to retrieving a semantically matched knowledge structure. A question and answer pair is received, wherein the answer is received from a query of a search engine. The question is constraint-matched with the answer based on maximizing a plurality of constraints, wherein at least one of the plurality of constraints is a similarity score between the question and the answer, and wherein the constraint matching generates a matched sequence. For one or more answer sequences, subsequences are found that are not parsed as answer slots. Query results are obtained from another search engine based on a combination of the answer or question and the non-answer subsequence. A KB is then refined based on the query results, the constraint matching, and a neural network training, for further subsequent semantic matching, wherein the KB includes a dense semantic vector indication of concepts.
Summary of: US20260010778A1
Systems, methods, and computer program products are provided for saving memory during training of knowledge graph neural networks. The method includes receiving a training dataset including a first set of knowledge graph embeddings associated with a plurality of entities for a first layer of a knowledge graph, inputting the training dataset into a knowledge graph neural network to generate at least one further set of knowledge graph embeddings associated with the plurality of entities for at least one further layer of the knowledge graph, quantizing the at least one further set of knowledge graph embeddings to provide at least one set of quantized knowledge graph embeddings, storing the at least one set of quantized knowledge graph embeddings in a memory, and dequantizing the at least one set of quantized knowledge graph embeddings to provide at least one set of dequantized knowledge graph embeddings.
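A small NumPy sketch of the quantize/store/dequantize cycle for layer embeddings, assuming simple symmetric int8 quantization with a single scale; the bit width and scaling rule are illustrative, not the patent's scheme.

```python
import numpy as np

def quantize_embeddings(emb: np.ndarray, bits: int = 8):
    scale = np.abs(emb).max() / (2 ** (bits - 1) - 1)
    q = np.round(emb / scale).astype(np.int8)
    return q, scale                                   # the int8 tensor is what gets stored in memory

def dequantize_embeddings(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale               # recovered before the next layer / backward pass

layer_embeddings = np.random.randn(10000, 128).astype(np.float32)
q, s = quantize_embeddings(layer_embeddings)          # roughly 4x less memory than float32
approx = dequantize_embeddings(q, s)
```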
Summary of: US20260010583A1
A semantically generated video method and system receives a request to generate an imaginative scenario that is embodied by a video and receives a description of the imaginative scenario. The request and description are interpreted by one or more trained computer-implemented neural networks. The one or more trained neural networks are trained using a training set that comprises syntactical elements and images and learn during the training correspondences between each of a plurality of subsets of the syntactical elements and one or more patterns of pixels in the images. Representations of pixels are generated by applying the one or more trained neural networks in accordance with the learned training correspondences and contexts that are associated with the imaginative scenario. A video stream is provided to a user that includes the pixels.
Summary of: US20260010755A1
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for enabling a user to conduct a dialogue. Implementations of the system learn when to rely on supporting evidence, obtained from an external search system via a search system interface, and are also able to generate replies for the user that align with the preferences of a previously trained response selection neural network. Implementations of the system can also use a previously trained rule violation detection neural network to generate replies that take account of previously learnt rules.
Summary of: US20260010809A1
Distribution of data in a neural network data set is used to determine an optimal compressor configuration for compressing the neural network data set and/or the underlying data type of the neural network data set. By using a generalizable optimization that examines the data prior to compressor invocation, the example non-limiting technology herein makes it possible to tune a compressor to better target the incoming data. For sparse data compression, this step may involve examining the distribution of data (e.g., zeros in the data). For other algorithms, it may involve other types of inspection. This changes the fundamental behavior of the compressor itself. By inspecting the distribution of data (e.g., zeros in the data), it is also possible to very accurately predict the data width of the underlying data. This is useful because the data type is not always known a priori, and lossy compression algorithms useful for deep learning depend on knowing the true data type to achieve good compression rates.
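A minimal NumPy sketch of inspecting the zero distribution before choosing a compressor configuration; the threshold and the two-way sparse/dense choice are illustrative assumptions, and the data-width prediction mentioned above is omitted here.

```python
import numpy as np

def choose_compressor(raw: bytes, sparse_threshold: float = 0.5):
    data = np.frombuffer(raw, dtype=np.uint8)
    zero_fraction = float(np.mean(data == 0))
    # A high density of zero bytes favors a sparse / zero-run configuration;
    # otherwise fall back to a generic byte-oriented configuration.
    scheme = "sparse" if zero_fraction > sparse_threshold else "dense"
    return scheme, zero_fraction

activations = np.zeros(1024, dtype=np.float16)
activations[::7] = 1.5                               # mostly-zero tensor, typical after ReLU
print(choose_compressor(activations.tobytes()))      # -> ("sparse", ~0.86)
```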
Summary of: US20260010810A1
A semantically interpreted video method, system, and apparatus access a video and receive a request to provide information with respect to objects that are depicted in the video. The request is interpreted by applying a trained neural network that is trained to make semantic-based inferences from syntactical elements. A trained neural network is applied to identify the one or more objects in the video in accordance with the interpretation of the syntactical elements and training in which correspondences between syntactical elements and patterns of pixels are learned. A communication that includes attributes associated with each of the identified objects is generated by applying a trained neural network and is delivered to a user.
Summary of: US20260010969A1
One embodiment provides for a graphics processing unit to perform computations associated with a neural network, the graphics processing unit comprising a hardware processing unit having a dynamic precision fixed-point unit that is configurable to convert elements of a floating-point tensor so as to convert the floating-point tensor into a fixed-point tensor.
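A NumPy sketch of the float-to-fixed-point conversion such a unit performs, assuming a signed 16-bit representation with 8 fractional bits; the bit widths and rounding mode are illustrative assumptions, not the hardware's actual format.

```python
import numpy as np

def float_to_fixed(t: np.ndarray, frac_bits: int = 8, total_bits: int = 16) -> np.ndarray:
    # Scale by 2**frac_bits and clamp to the signed representable range,
    # yielding a fixed-point tensor stored in 16-bit integers.
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(t * (1 << frac_bits)), lo, hi).astype(np.int16)

def fixed_to_float(q: np.ndarray, frac_bits: int = 8) -> np.ndarray:
    return q.astype(np.float32) / (1 << frac_bits)

weights = np.random.randn(4, 4).astype(np.float32)
fixed = float_to_fixed(weights)        # what the fixed-point unit would operate on
restored = fixed_to_float(fixed)       # approximate reconstruction of the original tensor
```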
Publication No.: WO2026003488A1 02/01/2026
Applicant:
BAE SYSTEMS PLC [GB]
Summary of: WO2026003488A1
The present invention relates to a computer-implemented method of training a pursuer neural network agent. The method comprises: providing an evader neural network agent trained to iteratively update a plurality of dynamic parameters of an evader, a pursuer neural network agent initialised to iteratively update a plurality of approximators for a pursuer, and an ordinary differential equation solver configured to update a plurality of dynamic parameters of the pursuer based on the updated approximators; sampling initial dynamic parameters from a set of initial dynamic parameters of the evader; running a simulation in which the pursuer neural network agent pursues the evader neural network agent based on the sampled initial dynamic parameters; computing a loss based on the dynamic parameters of the pursuer from the ordinary differential equation solver and the dynamic parameters of the evader from the evader neural network agent; and optimising weights of the pursuer neural network agent based on the computed loss.
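A skeletal PyTorch sketch of this training loop under strong simplifications not stated in the abstract: 2-D point dynamics, an explicit Euler step standing in for the ODE solver, and a frozen constant-velocity evader instead of a trained evader agent; the network size, time step, and loss are all illustrative.

```python
import torch
import torch.nn as nn

pursuer = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))  # outputs pursuer "approximators"
evader = lambda state: torch.tensor([1.0, 0.0])                          # frozen stand-in evader policy

opt = torch.optim.Adam(pursuer.parameters(), lr=1e-3)

for episode in range(200):
    e = torch.rand(2) * 10.0          # sampled initial dynamic parameters of the evader
    p = torch.zeros(2)                # pursuer initial dynamic parameters
    dt = 0.1
    for _ in range(50):
        u = pursuer(torch.cat([p, e]))            # pursuer network proposes approximators (velocities)
        p = p + dt * u                            # "ODE solver": explicit Euler update of pursuer state
        e = e + dt * evader(torch.cat([e, p]))    # evader dynamics from the (frozen) evader agent
    loss = ((p - e) ** 2).sum()                   # loss from final pursuer and evader dynamic parameters
    opt.zero_grad()
    loss.backward()
    opt.step()
```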