Abstract of: US20260086912A1
The present disclosure relates to methods and systems for providing inferences using machine learning systems. The methods and systems receive a load forecast for processing requests by a machine learning model and split the machine learning model into a plurality of machine learning model portions based on the load forecast. The methods and systems determine a batch size for the requests for the machine learning model portions. The methods and systems use one or more available resources to execute the plurality of machine learning model portions to process the requests and generate inferences for the requests.
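The split-and-batch scheme described above can be sketched as follows. This is a minimal illustration only: the sizing heuristics, function names, and parameters (`capacity_per_resource`, `latency_budget_ms`, and so on) are assumptions, not taken from the patent.

```python
import math

def split_model(layers, load_forecast, capacity_per_resource):
    """Split `layers` into portions; a higher load forecast yields more
    (smaller) portions, capped at one layer per portion."""
    n_portions = max(1, math.ceil(load_forecast / capacity_per_resource))
    n_portions = min(n_portions, len(layers))
    size = math.ceil(len(layers) / n_portions)
    return [layers[i:i + size] for i in range(0, len(layers), size)]

def batch_size_for(load_forecast, latency_budget_ms, per_request_ms):
    """Largest batch that still fits the latency budget, capped by load."""
    return max(1, min(load_forecast, latency_budget_ms // per_request_ms))
```

For example, a forecast of 100 requests against a per-resource capacity of 40 splits an 8-layer model into three portions of at most three layers each.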
Abstract of: US20260088022A1
Systems and methods are disclosed for generating internal state representations of a neural network during processing and using the internal state representations for classification or search. In some embodiments, the internal state representations are generated from the output activation functions of a subset of nodes of the neural network. The internal state representations may be used for classification by training a classification model using internal state representations and corresponding classifications. The internal state representations may be used for search by producing a search feature from a search input and comparing the search feature with one or more feature representations to find the feature representation with the highest degree of similarity.
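The search use case above reduces to a nearest-neighbour lookup over stored internal state representations. The sketch below assumes cosine similarity as the similarity measure, which the abstract does not specify:

```python
import math

def internal_state(activations, node_subset):
    """Internal state representation: activations of a chosen node subset."""
    return [activations[i] for i in node_subset]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_feature, stored):
    """Return the key in `stored` whose feature is most similar to the query."""
    return max(stored, key=lambda k: cosine(query_feature, stored[k]))
```

Here `stored` maps item identifiers to previously computed internal state representations, and `search` returns the identifier of the best match.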
Abstract of: US20260088021A1
Apparatuses, systems, and techniques to facilitate understanding of media content using neural networks to adjust playback speed and volume based on environmental and other factors. In at least one embodiment, playback of media content is slowed down or sped up if audio associated with said media content is difficult to understand based on background noise, accent, difficulty of material, as well as other factors that decrease understandability of media content.
Abstract of: US20260088023A1
A method for training hotword detection includes receiving a training input audio sequence including a sequence of input frames that define a hotword that initiates a wake-up process on a device. The method also includes feeding the training input audio sequence into an encoder and a decoder of a memorized neural network. Each of the encoder and the decoder of the memorized neural network includes sequentially-stacked singular value decomposition filter (SVDF) layers. The method further includes generating a logit at each of the encoder and the decoder based on the training input audio sequence. For each of the encoder and the decoder, the method includes smoothing each respective logit generated from the training input audio sequence, determining a max pooling loss from a probability distribution based on each respective logit, and optimizing the encoder and the decoder based on all max pooling losses associated with the training input audio sequence.
Abstract of: US20260086524A1
Embodiments of the present disclosure relate to generating controller logic. Indication of a controller logic generation request associated with an asset identifier may be received. A prompt template set associated with a controller logic generation workflow may be identified based on the asset identifier. The prompt template of the prompt template set may comprise one or more instruction sets. The prompt template set may be input into a large language model comprising one or more transformer neural networks and configured to generate a controller logic configuration file for the asset identifier based on the prompt template set and intent classification associated with each prompt template. The controller logic configuration file may be received from the large language model. Performance of one or more prediction-based actions may be initiated based on the controller logic configuration file.
Abstract of: US20260079456A1
A computer-implemented method includes receiving, at a neural network, input data indicating one or more tasks associated with production, wherein the neural network is integrated with a cognitive architecture that includes an imaginal memory buffer; utilizing the input data indicating the one or more tasks with one or more production rule sets associated with an expert decision to obtain goal data indicating the expert decision utilizing the imaginal memory buffer; selecting, from the imaginal memory buffer, one or more sectors associated with goal data indicating a novice decision, goal data indicating an intermediate decision, and goal data indicating the expert decision to obtain data indicating decision-making results; and, in response to meeting a convergence threshold utilizing the data indicating decision-making results, outputting a simulation associated with a recommendation indicating information associated with at least the input data indicating the one or more tasks associated with production.
Abstract of: US20260080313A1
A system receives domain specific questions from users and answers them. The system stores domain specific information comprising domain specific facts and domain specific programs. The system receives an input request to perform a domain specific task for the particular domain. The system provides the input request to a machine learning model trained to predict a score indicating whether the input request should be processed by a symbolic processor or by a neural network. If the score predicted by the machine learning model indicates that the input request should be processed by the symbolic processor, the system determines whether a stored domain specific program can solve the input request. If none of the stored domain specific programs can solve the input request, the system generates a new program for solving the input request using a machine learning based language model and the set of domain specific facts.
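The routing logic above can be sketched as a small dispatcher. The 0.5 threshold and the representation of programs as `(can_solve, run)` callable pairs are illustrative assumptions; the abstract only specifies a learned score that selects between the symbolic and neural paths:

```python
def route_request(request, score, programs, facts, synthesize, neural):
    """Route a request to a symbolic processor or a neural network.

    `score` returns a value in [0, 1]; above 0.5 the request is treated as
    symbolic. `programs` is a list of (can_solve, run) callable pairs, and
    `synthesize` builds a new program when no stored one applies.
    """
    if score(request) > 0.5:
        for can_solve, run in programs:
            if can_solve(request):
                return run(request, facts)
        # No stored program fits: generate a new one from the facts,
        # standing in for the language-model-based program generation.
        new_program = synthesize(request, facts)
        return new_program(request, facts)
    return neural(request)
```

A stored program that answers "area" requests would be registered as `(lambda r: r == "area", lambda r, f: f["area"])`; any request the score model deems non-symbolic falls through to the neural path.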
Abstract of: EP4711869A1
The present invention relates to a multi-task real-time inference scheduling system and real-time inference scheduling method of a machine tool, wherein a central control unit is connected to each of one or more individual control units through a network, receives a use context of each machine tool through each individual control unit, generates a multi-task learning model through a neural network, infers multiple tasks required to be performed by the individual control unit of each machine tool through machine learning by using real-time use contexts collected during operation of the machine tool by a use scenario, and schedules the multiple tasks of the machine tool through machine learning.
Abstract of: WO2026053443A1
This visualization device is configured to comprise: a function acquisition unit (1) which, when input data is given to an input layer, acquires a model function that is a function obtained by modeling the computation of a plurality of intermediate layers of a neural network that outputs an analysis result of the input data from an output layer; a contribution level calculation unit (2) that calculates the contribution level of each of the intermediate layers to the analysis result on the basis of the model function acquired by the function acquisition unit (1); and a map generation unit (3) that generates a visualization map of each of the intermediate layers on the basis of the contribution level calculated by the contribution level calculation unit (2). When the number of the plurality of intermediate layers is M (M is an integer of 2 or more), the contribution level calculation unit (2) repeatedly performs an update process for updating the value of the m-th (m=1, ..., M) intermediate layer from the input layer side on the basis of the model function, and calculates the contribution level of the m-th intermediate layer on the basis of the step interval between the step of the s-th (s is an integer of 1 or more) update process and the step of the (s+1)-th update process, and a gradient related to the value of the m-th intermediate layer after the s-th update process.
Abstract of: US20260071878A1
In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.
Abstract of: US20260073182A1
Embodiments relate to streaming convolution operations in a neural processor circuit that includes a neural engine circuit and a neural task manager. The neural task manager obtains multiple task descriptors and multiple subtask descriptors. Each task descriptor identifies a respective set of the convolution operations of a respective layer of a set of layers. Each subtask descriptor identifies a corresponding task descriptor and a subset of the convolution operations on a portion of a layer of the set of layers identified by the corresponding task descriptor. The neural processor circuit configures the neural engine circuit for execution of the subset of the convolution operations using the corresponding task descriptor. The neural engine circuit performs the subset of the convolution operations to generate output data that correspond to input data of another subset of the convolution operations identified by another subtask descriptor from the list of subtask descriptors.
Abstract of: WO2026053001A1
Systems and methods for image recognition using computer vision devices with a field-programmable gate array (FPGA) and a convolutional neural network (CNN). A first CNN configuration is used for object class detection. Alternative CNN configurations are loaded for precise object classification and identification in real time under FPGA control.
Abstract of: EP4707735A1
Techniques for localizing a vehicle in real time using dynamic uncertainty estimates are presented. The techniques include obtaining a terrain image captured by the vehicle; passing the terrain image to a trained evidential deep learning neural network subsystem, from which a dynamic uncertainty value and a first feature vector are obtained in real time; for each of a plurality of candidate terrain locations, comparing the first feature vector to a respective second feature vector representative of a candidate terrain location, from which a respective similarity score is obtained; for at least one of the plurality of candidate terrain locations, updating in real time, by a recursive Bayesian estimator, a respective location weight based on the dynamic uncertainty value and the respective similarity score; estimating, in real time, a location of the vehicle based on the plurality of location weights; and providing the location of the vehicle.
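The recursive Bayesian step above can be sketched numerically. The exponential likelihood, with the dynamic uncertainty as a temperature that flattens the update, is an assumption for illustration; the abstract specifies only that the weight update uses the uncertainty value and the similarity score:

```python
import math

def update_weights(prior_weights, similarities, uncertainty):
    """One recursive-Bayesian step: scale each candidate location's weight by
    a likelihood derived from its similarity score, tempered by the dynamic
    uncertainty (high uncertainty flattens the update), then normalize."""
    likelihoods = [math.exp(s / max(uncertainty, 1e-9)) for s in similarities]
    posterior = [w * l for w, l in zip(prior_weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

def estimate_location(locations, weights):
    """Estimate the vehicle location as the weighted mean of candidates."""
    x = sum(w * lx for w, (lx, _) in zip(weights, locations))
    y = sum(w * ly for w, (_, ly) in zip(weights, locations))
    return (x, y)
```

With low uncertainty, a candidate whose feature vector matches well quickly dominates the weight vector; with high uncertainty, one observation barely moves the weights, which is the intended effect of the dynamic uncertainty estimate.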
Abstract of: EP4708115A1
Provided are a computing device and method for assigning a generator to a semiconductor layout and a method of training a neural network. The former method includes an input operation of receiving a layout by a computing device, a division operation of dividing the layout into a plurality of channels, a conversion operation of converting each of the divided channels into a matrix, and an inference operation of inferring a generator to be assigned to the layout from the matrix. The inference operation is performed by displaying one or more generator candidates corresponding to the received layout.
Abstract of: WO2026046493A1
A method for a base station associated with one or more terminal devices, comprising determining an initial state at a first time, determining a resource allocation for the one or more terminal devices associated with the base station based on a Deep Neural Network (DNN) policy, causing the resource allocation to be executed, determining a resulting state at a second time preceding the execution of the resource allocation, determining a local reward for the base station, and determining local importance values for a current transition and historical transitions. Further, receiving neighbouring importance values for a current transition and historical transitions of a neighbouring base station, and determining global importance values based on the local importance values and the neighbouring importance values. Determining which of the current transition and the historical transitions has the lowest global importance value and dropping it to store the remaining transitions. Receiving neighbouring rewards for previous transitions and training the DNN policy based on the previous transitions and the received neighbouring rewards.
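The transition-dropping step above amounts to evicting the lowest-priority entry from a bounded replay buffer. In the sketch below, combining local and neighbouring importance by summation is an assumption; the abstract states only that global importance is determined from both:

```python
def global_importance(local_vals, neighbour_vals):
    """Combine per-transition local and neighbouring importance values.
    The sum is an illustrative choice of combination."""
    return [l + n for l, n in zip(local_vals, neighbour_vals)]

def keep_transitions(transitions, local_vals, neighbour_vals, capacity):
    """Repeatedly drop the transition with the lowest global importance
    until at most `capacity` transitions remain stored."""
    scored = list(zip(global_importance(local_vals, neighbour_vals),
                      transitions))
    while len(scored) > capacity:
        scored.remove(min(scored, key=lambda pair: pair[0]))
    return [t for _, t in scored]
```

A buffer of three transitions with global importance values 1.0, 0.2, and 0.6, trimmed to capacity two, drops the middle one and keeps the rest.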
Abstract of: US20260064726A1
A method and system for providing an intelligent response agent based on a sophisticated reasoning and speculation function can generate and provide response data for queries related to specialized documents using a deep-learning neural network that implements a stepwise process for a sophisticated reasoning and speculation function.
Abstract of: WO2026049296A1
An electronic device is disclosed. The present electronic device comprises: a neural network model including an acoustic encoder and a text encoder trained on relationships between a plurality of first sample sounds and a plurality of sample texts; a memory for storing instructions; and at least one processor including processing circuitry, wherein the instructions, when executed individually or collectively by the at least one processor, may: input target text corresponding to a target keyword into the text encoder to obtain a text embedding; and retrain the neural network model on the basis of a plurality of second sample sounds including the target keyword and the text embedding to obtain a final neural network model in which the acoustic encoder is updated.
Abstract of: US20260065662A1
An electronic device includes: a processor; and a memory storing instructions. By executing the instructions, the processor is configured to: receive a first image, recognize a plurality of objects in the first image to generate object information representing the plurality of objects, generate an object relationship graph including relationships between the plurality of objects, based on the first image and the object information, obtain image effect data including image effects to be respectively applied to the plurality of objects by inputting the object relationship graph to an image modification Graph Neural Network (GNN) model, and generate a modified image based on the first image, the object information, and the image effect data.
Abstract of: WO2026046490A1
A method for a base station associated with one or more terminal devices comprising determining an initial state at a first time, determining a resource allocation for the one or more terminal devices associated with the base station based on a Deep Neural Network, DNN, policy, causing the resource allocation to be executed, determining a resulting state at a second time preceding the execution of the resource allocation, determining a local reward for the base station, receiving a neighbouring reward from a neighbouring base station, determining a group reward based on the local reward and the received neighbouring reward, receiving a previous neighbouring hyper parameter, updating a local hyper parameter based on the previous neighbouring hyper parameter and the group reward, wherein updating the local hyper parameter utilizes a hyper DNN policy, and training the DNN policy based on transitions of the base station and the updated local hyper parameter.
Abstract of: WO2026046614A1
The invention relates to a system for a vehicle (0), comprising a computing module (1) having a neural network and an input data interface (3) designed to provide past movements of other road users and current sensor data, wherein a first (5) and a second (7) sub-network make a location-related or a time-related prediction of the future behavior of other road users, and the input data interface (3) carries out reinforcement learning with the aid of an anomaly detection unit (9) for adapting weightings of a machine learning attention mechanism.
Abstract of: WO2026050564A1
A method for generating a classifier, comprising: initializing a neural network with a fully connected architecture; applying a regularization constraint to a first set of weights between neurons in the input layer and the plurality of latent features in the hidden layer; iteratively reducing a number of incoming connections to each latent feature in the hidden layer to a predetermined number based on the regularized first set of weights; weighting the loss function's value based on categories of activation tuples, wherein contributions of data entries with activation tuples of size greater than a predetermined threshold to the loss function's value are limited, and wherein contributions of data entries with activation tuples of size "0" to the loss function's value are minimized to a predefined percentage; and selectively updating sets of weights associated with top-ranked latent features, as evaluated by the magnitude of their contributions at the output layer, in a training process.
Abstract of: US20260063424A1
Techniques for localizing a vehicle in real time using dynamic uncertainty estimates are presented. The techniques include obtaining a terrain image captured by the vehicle; passing the terrain image to a trained evidential deep learning neural network subsystem, from which a dynamic uncertainty value and a first feature vector are obtained in real time; for each of a plurality of candidate terrain locations, comparing the first feature vector to a respective second feature vector representative of a candidate terrain location, from which a respective similarity score is obtained; for at least one of the plurality of candidate terrain locations, updating in real time, by a recursive Bayesian estimator, a respective location weight based on the dynamic uncertainty value and the respective similarity score; estimating, in real time, a location of the vehicle based on the plurality of location weights; and providing the location of the vehicle.
Abstract of: US20260057234A1
A method and a device for training a graph neural network are provided. The method may be performed by a graphics processing unit (GPU), and may include determining at least one batch of training data; transmitting batch information corresponding to the determined at least one batch to at least one memory expansion device, so that the at least one memory expansion device acquires feature data for one or more data blocks of the at least one batch based on the batch information, receiving the feature data from the at least one memory expansion device; and training the graph neural network based on the feature data.
Abstract of: US20260056983A1
A method and system for providing an intelligent response agent based on a sophisticated reasoning and speculation function can generate and provide response data for queries related to specialized documents using a deep-learning neural network that implements a stepwise process for a sophisticated reasoning and speculation function.
Publication No.: US20260057685A1 26/02/2026
Applicant:
DEEPMIND TECH LIMITED [GB]
DEEPMIND TECHNOLOGIES LIMITED
Abstract of: US20260057685A1
Methods, systems, and computer readable storage media for performing operations comprising: obtaining a plurality of initial network inputs that have been classified as belonging to a corresponding ground truth class; processing each of the plurality of initial network inputs using a trained target neural network to generate a respective predicted network output for each initial network input, the respective predicted network output comprising a respective score for each of a plurality of classes, the plurality of classes comprising the ground truth class; identifying, based on the respective predicted network outputs and the ground truth class, a subset of the initial network inputs as having been misclassified by the trained target neural network; and determining, based on the subset of initial network inputs, one or more failure case latent representations, wherein each failure case latent representation is a latent representation that characterizes network inputs that belong to the ground truth class but that are likely to be misclassified by the trained target neural network.
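The misclassification filtering above is straightforward to sketch. Representing each predicted network output as a class-to-score mapping, and summarizing the failure cases with a single mean latent vector, are illustrative simplifications; the patent allows multiple failure-case latent representations:

```python
def misclassified(inputs, outputs, ground_truth_class):
    """Return the inputs whose highest-scoring predicted class is not the
    ground-truth class. `outputs` is one {class: score} dict per input."""
    failures = []
    for x, scores in zip(inputs, outputs):
        predicted = max(scores, key=scores.get)
        if predicted != ground_truth_class:
            failures.append(x)
    return failures

def failure_case_latent(latents):
    """Characterize the failure cases by the mean of their latent vectors
    (a single-centroid simplification)."""
    dim = len(latents[0])
    return [sum(v[i] for v in latents) / len(latents) for i in range(dim)]
```

Given two inputs labelled "cat", one scored {cat: 0.9, dog: 0.1} and one scored {cat: 0.2, dog: 0.8}, only the second is flagged, and the failure-case latent is built from its latent representation (and those of any other flagged inputs).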