Summary of: US20260094027A1
An electronic device for executing a neural network model that includes a non-linear operation, and an operation method thereof, are provided. The operation method includes obtaining data to be inferred and obtaining an inference result output from the neural network model, which includes a plurality of nodes, as the data is input to the model. In the inference process, a first weight applied when a value of a first node among the plurality of nodes is transmitted to a second node may be updated based on a value of a first reference node, which is any one of the plurality of nodes.
Summary of: US20260094429A1
Techniques related to poly-scale kernel-wise convolutional neural network layers are discussed. A poly-scale kernel-wise convolutional neural network layer is applied to an input volume to generate an output volume and includes filters, each having a number of filter kernels with the same sample rate and differing dilation rates, optionally arranged in a repeating pattern of dilation rate groups within each filter, with the pattern of dilation rate groups offset between the filters of the layer.
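As a rough illustration of the offset dilation pattern described above, the sketch below assigns a dilation rate to every kernel of every filter. The function name, the rate group (1, 2, 4), and the one-position offset per filter are assumptions for illustration, not values from the patent.

```python
import numpy as np

def assign_dilation_rates(num_filters, kernels_per_filter, rate_group=(1, 2, 4)):
    """Assign a dilation rate to every kernel of every filter: rates
    repeat in a fixed group along the kernel axis, and the pattern is
    offset by one position per filter so neighbouring filters sample
    the input at staggered scales. The rate group and one-step offset
    are assumed values, not details from the patent."""
    g = len(rate_group)
    rates = np.empty((num_filters, kernels_per_filter), dtype=int)
    for f in range(num_filters):
        for k in range(kernels_per_filter):
            rates[f, k] = rate_group[(k + f) % g]  # per-filter offset
    return rates
```

For example, `assign_dilation_rates(3, 6)` gives each of the three filters the repeating group (1, 2, 4) along its six kernels, shifted by one position per filter.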
Summary of: EP4718233A1
Embodiments of the present disclosure relate to generating controller logic. An indication of a controller logic generation request associated with an asset identifier may be received. A prompt template set associated with a controller logic generation workflow may be identified based on the asset identifier. Each prompt template of the prompt template set may comprise one or more instruction sets. The prompt template set may be input into a large language model comprising one or more transformer neural networks and configured to generate a controller logic configuration file for the asset identifier based on the prompt template set and an intent classification associated with each prompt template. The controller logic configuration file may be received from the large language model. Performance of one or more prediction-based actions may be initiated based on the controller logic configuration file.
Summary of: US20260088022A1
Systems and methods are disclosed for generating internal state representations of a neural network during processing and using the internal state representations for classification or search. In some embodiments, the internal state representations are generated from the output activation functions of a subset of nodes of the neural network. The internal state representations may be used for classification by training a classification model using internal state representations and corresponding classifications. The internal state representations may be used for search by producing a search feature from a search input and comparing the search feature with one or more feature representations to find the feature representation with the highest degree of similarity.
Summary of: US20260088021A1
Apparatuses, systems, and techniques to facilitate understanding of media content using neural networks to adjust playback speed and volume based on environmental and other factors. In at least one embodiment, playback of media content is slowed down or sped up if audio associated with said media content is difficult to understand based on background noise, accent, difficulty of material, as well as other factors that decrease understandability of media content.
Summary of: US20260088023A1
A method for training hotword detection includes receiving a training input audio sequence including a sequence of input frames that define a hotword that initiates a wake-up process on a device. The method also includes feeding the training input audio sequence into an encoder and a decoder of a memorized neural network. Each of the encoder and the decoder of the memorized neural network includes sequentially-stacked singular value decomposition filter (SVDF) layers. The method further includes generating a logit at each of the encoder and the decoder based on the training input audio sequence. For each of the encoder and the decoder, the method includes smoothing each respective logit generated from the training input audio sequence, determining a max pooling loss from a probability distribution based on each respective logit, and optimizing the encoder and the decoder based on all max pooling losses associated with the training input audio sequence.
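A minimal numpy sketch of the smoothing and max-pooling-loss steps, under assumed simplifications: moving-average smoothing stands in for the patent's smoothing, and the per-logit probability distribution is collapsed into a single pooled sigmoid, purely for illustration.

```python
import numpy as np

def smooth_logits(logits, k=3):
    """Moving-average smoothing of per-frame logits; a simple stand-in
    for the smoothing step described in the abstract."""
    return np.convolve(logits, np.ones(k) / k, mode="same")

def max_pooling_loss(logits, label):
    """Binary cross-entropy on the max smoothed logit of the sequence.
    For a positive sequence (label=1), only the best-scoring frame is
    pushed up; for a negative one, the max keeps every frame
    suppressed. An assumed, simplified form of the pooled loss."""
    pooled = np.max(logits)
    p = 1.0 / (1.0 + np.exp(-pooled))  # sigmoid of the pooled logit
    return -(label * np.log(p) + (1 - label) * np.log(1.0 - p))
```

Usage would look like `max_pooling_loss(smooth_logits(frame_logits), label=1)` for a training sequence known to contain the hotword.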
Summary of: US20260086524A1
Embodiments of the present disclosure relate to generating controller logic. An indication of a controller logic generation request associated with an asset identifier may be received. A prompt template set associated with a controller logic generation workflow may be identified based on the asset identifier. Each prompt template of the prompt template set may comprise one or more instruction sets. The prompt template set may be input into a large language model comprising one or more transformer neural networks and configured to generate a controller logic configuration file for the asset identifier based on the prompt template set and an intent classification associated with each prompt template. The controller logic configuration file may be received from the large language model. Performance of one or more prediction-based actions may be initiated based on the controller logic configuration file.
Summary of: US20260086912A1
The present disclosure relates to methods and systems for providing inferences using machine learning systems. The methods and systems receive a load forecast for processing requests by a machine learning model and split the machine learning model into a plurality of machine learning model portions based on the load forecast. The methods and systems determine a batch size for the requests for the machine learning model portions. The methods and systems use one or more available resources to execute the plurality of machine learning model portions to process the requests and generate inferences for the requests.
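The split-and-batch planning could be sketched as follows. The even layer split, the per-resource layer budget, and the load-to-batch-size heuristic are all assumptions for illustration, not the patented method.

```python
def plan_inference(load_forecast, total_layers, layers_per_resource=8,
                   max_batch=64):
    """Split a model into sequential layer portions and pick a batch
    size from a load forecast (requests/sec). Illustrative only: the
    even split and the heuristics are assumptions."""
    # Ceiling division: enough portions that each fits one resource.
    num_portions = max(1, (total_layers + layers_per_resource - 1)
                       // layers_per_resource)
    # Batch size grows with forecast load, capped by a resource limit.
    batch_size = min(max_batch, max(1, load_forecast // num_portions))
    bounds = [round(i * total_layers / num_portions)
              for i in range(num_portions + 1)]
    portions = list(zip(bounds[:-1], bounds[1:]))  # (start, end) layers
    return portions, batch_size
```

For a 24-layer model and a forecast of 100 requests/sec, this yields three 8-layer portions and a batch size of 33 under the assumed heuristics.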
Summary of: US20260079456A1
A computer-implemented method includes: receiving, at a neural network, input data indicating one or more tasks associated with production, wherein the neural network is integrated with a cognitive architecture that includes an imaginal memory buffer; utilizing the input data with one or more production rule sets associated with an expert decision to obtain goal data indicating the expert decision via the imaginal memory buffer; selecting, from the imaginal memory buffer, one or more sectors associated with goal data indicating a novice decision, goal data indicating an intermediate decision, and goal data indicating the expert decision, to obtain data indicating decision-making results; and, in response to meeting a convergence threshold using the data indicating decision-making results, outputting a simulation associated with a recommendation indicating information associated with at least the input data.
Summary of: US20260080313A1
A system receives domain specific questions from users and answers them. The system stores domain specific information comprising domain specific facts and domain specific programs. The system receives an input request to perform a domain specific task for the particular domain. The system provides the input request to a machine learning model trained to predict a score indicating whether the input request should be processed by a symbolic processor or by a neural network. If the score predicted by the machine learning model indicates that the input request should be processed by the symbolic processor, the system determines whether a stored domain specific program can solve the input request. If none of the stored domain specific programs can solve the input request, the system generates a new program for solving the input request using a machine learning based language model and the set of domain specific facts.
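The routing decision described above reads as a straightforward dispatch. The sketch below uses entirely hypothetical interfaces (`score_model`, `can_solve`, `run`, and `program_synthesizer` are invented names, not the patent's API):

```python
def route_request(request, score_model, symbolic_programs, neural_model,
                  program_synthesizer, threshold=0.5):
    """Dispatch a request to a symbolic processor or a neural network
    based on a learned routing score; synthesize a new program when no
    stored one applies. All interfaces here are assumed callables."""
    score = score_model(request)
    if score >= threshold:  # score favours symbolic processing
        for program in symbolic_programs:
            if program.can_solve(request):
                return program.run(request)
        # No stored program fits: generate one with the language model.
        new_program = program_synthesizer(request)
        return new_program.run(request)
    return neural_model(request)
```

The design choice this illustrates: program synthesis is only attempted after both the router and the stored-program lookup have been exhausted, keeping the expensive language-model path as a fallback.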
Summary of: EP4711869A1
The present invention relates to a multi-task real-time inference scheduling system and real-time inference scheduling method for a machine tool. A central control unit is connected to each of one or more individual control units through a network, receives a use context of each machine tool through each individual control unit, and generates a multi-task learning model through a neural network. Using real-time use contexts collected during operation of the machine tool under a use scenario, the central control unit infers, through machine learning, the multiple tasks to be performed by the individual control unit of each machine tool, and schedules those tasks.
Summary of: WO2026053443A1
This visualization device is configured to comprise: a function acquisition unit (1) which, when input data is given to an input layer, acquires a model function that is a function obtained by modeling the computation of a plurality of intermediate layers of a neural network that outputs an analysis result of the input data from an output layer; a contribution level calculation unit (2) that calculates the contribution level of each of the intermediate layers to the analysis result on the basis of the model function acquired by the function acquisition unit (1); and a map generation unit (3) that generates a visualization map of each of the intermediate layers on the basis of the contribution level calculated by the contribution level calculation unit (2). When the number of the plurality of intermediate layers is M (M is an integer of 2 or more), the contribution level calculation unit (2) repeatedly performs an update process for updating the value of the m-th (m=1, ..., M) intermediate layer from the input layer side on the basis of the model function, and calculates the contribution level of the m-th intermediate layer on the basis of the step interval between the step of the s-th (s is an integer of 1 or more) update process and the step of the (s+1)-th update process, and a gradient related to the value of the m-th intermediate layer after the s-th update process.
Summary of: US20260071878A1
In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.
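Decoding key points from a heat map output typically reduces to thresholded local-maximum extraction. The sketch below is a generic version of that step, not the patent's specific post-processing:

```python
import numpy as np

def decode_keypoints(heatmap, threshold=0.5):
    """Recover key-point pixel coordinates from a 2D heat map by
    taking local maxima above a confidence threshold. A generic
    decoding sketch; the threshold value is an assumption."""
    h, w = heatmap.shape
    points = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y, x]
            if v < threshold:
                continue
            # 3x3 neighbourhood check for a local maximum.
            ys = slice(max(0, y - 1), y + 2)
            xs = slice(max(0, x - 1), x + 2)
            if v >= heatmap[ys, xs].max():
                points.append((x, y, float(v)))
    return points
```

In a full pipeline, the recovered (x, y) peaks would then be refined by the vector-field offsets the abstract mentions before path proposals are built.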
Summary of: WO2026053001A1
Systems and methods for image recognition using computer vision devices with a field-programmable gate array (FPGA) and a convolutional neural network (CNN). A first CNN configuration is used for object class detection. Alternative CNN configurations are loaded for precise object classification and identification in real time under FPGA control.
Summary of: US20260073182A1
Embodiments relate to streaming convolution operations in a neural processor circuit that includes a neural engine circuit and a neural task manager. The neural task manager obtains multiple task descriptors and multiple subtask descriptors. Each task descriptor identifies a respective set of the convolution operations of a respective layer of a set of layers. Each subtask descriptor identifies a corresponding task descriptor and a subset of the convolution operations on a portion of a layer of the set of layers identified by the corresponding task descriptor. The neural processor circuit configures the neural engine circuit for execution of the subset of the convolution operations using the corresponding task descriptor. The neural engine circuit performs the subset of the convolution operations to generate output data that correspond to input data of another subset of the convolution operations identified by another subtask descriptor from the list of subtask descriptors.
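The task/subtask descriptor relationship might be modelled as below; the class names, fields, and fixed-size chunking are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class TaskDescriptor:
    """One layer's full set of convolution operations (names assumed)."""
    layer_id: int
    num_ops: int

@dataclass
class SubtaskDescriptor:
    """A slice of a task: references its parent task descriptor and the
    range of convolution operations it covers."""
    task: TaskDescriptor
    op_start: int
    op_end: int

def build_subtasks(tasks, chunk=4):
    """Split each task into subtasks so the output of one subtask can
    feed the next layer's subtask (illustrative chunking only)."""
    subtasks = []
    for t in tasks:
        for s in range(0, t.num_ops, chunk):
            subtasks.append(SubtaskDescriptor(t, s, min(s + chunk, t.num_ops)))
    return subtasks
```

Each `SubtaskDescriptor` identifies its corresponding `TaskDescriptor`, mirroring the abstract's description of subtask descriptors that reference a task descriptor plus a subset of its operations.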
Summary of: WO2026055446A1
Systems, methods, and computer-readable storage media for generating tests offering improved screening of subjects. A system can receive historical data associated with multiple variables and identify a cutoff value for each variable. The system can also generate a neural network using the historical data. The system can then generate, using the cutoff values and the neural network, one or more combination tests using multiple variables and the cutoff values for those variables. When a new sample is received, the system can then execute the one or more combination tests using the new sample's data, resulting in a prediction regarding the new sample. The one or more combination tests can then be tested and combined to form an ensemble test.
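A toy sketch of cutoff-based combination tests and a majority-vote ensemble as described above. The "any"/"all" combination modes, the >= cutoff convention, and the majority vote are assumptions; the patent's ensemble construction is not specified at this level of detail.

```python
import numpy as np

def combination_test(sample, cutoffs, mode="any"):
    """Flag a sample by comparing each variable to its cutoff and
    combining the per-variable results (assumed >= convention)."""
    flags = np.array([sample[k] >= c for k, c in cutoffs.items()])
    return bool(flags.any() if mode == "any" else flags.all())

def ensemble_test(sample, tests):
    """Majority vote over several combination tests (assumed rule)."""
    votes = [t(sample) for t in tests]
    return sum(votes) > len(votes) / 2
```

In the abstract's workflow, the cutoffs would come from historical data and the individual tests would first be validated before being combined into the ensemble.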
Summary of: EP4708115A1
Provided are a computing device and method for assigning a generator to a semiconductor layout and a method of training a neural network. The former method includes an input operation of receiving a layout by a computing device, a division operation of dividing the layout into a plurality of channels, a conversion operation of converting each of the divided channels into a matrix, and an inference operation of inferring a generator to be assigned to the layout from the matrix. The inference operation is performed by displaying one or more generator candidates corresponding to the received layout.
Publication No.: EP4707735A1 11/03/2026
Applicant:
BOEING CO [US]
The Boeing Company
Summary of: EP4707735A1
Techniques for localizing a vehicle in real time using dynamic uncertainty estimates are presented. The techniques include obtaining a terrain image captured by the vehicle; passing the terrain image to a trained evidential deep learning neural network subsystem, from which a dynamic uncertainty value and a first feature vector are obtained in real time; for each of a plurality of candidate terrain locations, comparing the first feature vector to a respective second feature vector representative of a candidate terrain location, from which a respective similarity score is obtained; for at least one of the plurality of candidate terrain locations, updating in real time, by a recursive Bayesian estimator, a respective location weight based on the dynamic uncertainty value and the respective similarity score; estimating, in real time, a location of the vehicle based on the plurality of location weights; and providing the location of the vehicle.
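One plausible form of the recursive Bayesian weight update described above, assuming an exponential likelihood in which higher dynamic uncertainty flattens the influence of the similarity scores (the likelihood model and the weighted-mean location estimate are assumptions, not the patent's formulas):

```python
import numpy as np

def update_location_weights(weights, similarities, uncertainty):
    """One recursive Bayesian update of candidate-location weights.
    Higher uncertainty -> flatter likelihood, so a single image
    influences the posterior less (assumed likelihood model)."""
    likelihood = np.exp(np.asarray(similarities) / max(uncertainty, 1e-9))
    posterior = np.asarray(weights) * likelihood
    return posterior / posterior.sum()

def estimate_location(candidate_locations, weights):
    """Weighted mean of candidate locations as the point estimate."""
    w = np.asarray(weights)
    return (np.asarray(candidate_locations) * w[:, None]).sum(axis=0)
```

Run per frame, this keeps a normalized weight per candidate terrain location and lets high-uncertainty images (e.g. clouds, low contrast) barely move the posterior, which matches the abstract's motivation for the dynamic uncertainty value.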