Abstract of: US20260057486A1
The present disclosure provides an apparatus and method using a guided neural network model for image processing. An apparatus may comprise a guidance map generator, a synthesis network, and an accelerator. The guidance map generator may receive a first image as a content image and a second image as a style image, and generate a first plurality of guidance maps and a second plurality of guidance maps from the first image and the second image, respectively. The synthesis network may synthesize the first plurality of guidance maps and the second plurality of guidance maps to determine guidance information. The accelerator may generate an output image by applying the style of the second image to the first image based on the guidance information.
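The three-stage pipeline in this abstract (guidance map generator, synthesis network, accelerator) can be sketched classically. The following is a minimal, hypothetical illustration only: the mean-pooled guidance maps, the tanh-based fusion, and the mean/std style transfer are stand-ins invented here, not the patent's actual networks.

```python
import numpy as np

def guidance_maps(image, levels=3):
    """Hypothetical guidance-map generator: one map per scale,
    produced by repeated 2x mean-pooling of the input image."""
    maps, m = [], image.astype(float)
    for _ in range(levels):
        maps.append(m)
        h, w = m.shape[0] // 2, m.shape[1] // 2
        m = m[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
    return maps

def synthesize(content_maps, style_maps):
    """Hypothetical synthesis step: fuse the two sets of maps into
    per-scale blend weights (the 'guidance information')."""
    return [0.5 + 0.5 * np.tanh(c.mean() - s.mean())
            for c, s in zip(content_maps, style_maps)]

def apply_style(content, style, weights):
    """Hypothetical accelerator step: transfer the style image's
    mean/std onto the content image, modulated by the guidance."""
    w = float(np.mean(weights))
    stylized = (content - content.mean()) / (content.std() + 1e-8)
    stylized = stylized * style.std() + style.mean()
    return w * content + (1.0 - w) * stylized

content = np.random.rand(16, 16)
style = np.random.rand(16, 16)
info = synthesize(guidance_maps(content), guidance_maps(style))
out = apply_style(content, style, info)
print(out.shape)  # (16, 16)
```

The point of the sketch is the data flow: guidance maps are computed per input, fused into guidance information, and only then used to drive the stylization.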
Abstract of: US20260057282A1
Apparatus for dynamic development operations ("DevOps") pipeline generation may include the current build and release of the source code files and related changed units within a data repository. The data repository may include a plurality of data associated with the current build and release of the source code file. A software system may collect the data from the repository. The collected data may be input into an artificial intelligence or machine learning ("AI/ML") module. The AI/ML module may use a large language model ("LLM") to create a plurality of nodes from the data. The LLM may create a knowledge graph from the nodes. The knowledge graph may be input into a quantum computing system to create attention matrices. The attention matrices may be input into a quantum annealing system to determine the DevOps plan. A transformer neural network ("TNN") may output and execute the DevOps plan.
Publication No.: US20260057413A1, 26/02/2026
Applicant:
VOICEMONK INC [US]
Abstract of: US20260057413A1
A computer-implemented method for usage recall on a user device is disclosed. The method includes capturing, at intervals, visual content presented on a display of the user device; analyzing the captured visual content by dividing the content into pixels and processing the pixels with a neural network trained to recognize visual elements and their positions to produce an element map; storing, in a local context store, entries associating the captured content with a time, an application or window identifier, and descriptors of the recognized elements; receiving a natural-language user query describing a past activity; retrieving, from the local context store, an entry responsive to the user query; and re-establishing at least part of a prior application state by generating user-interface events directed to a target visual element from the element map. The neural network may comprise deep or recurrent layers with LSTM and attention mechanisms optimized by reinforcement learning.
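The storage and retrieval steps of this method (a local context store of timestamped entries with element descriptors, queried in natural language) can be sketched as follows. This is an assumed illustration: the class name, entry schema, and word-overlap ranking are inventions of this sketch, not the patent's neural retrieval mechanism.

```python
import time

class LocalContextStore:
    """Hypothetical local context store: each entry associates a
    capture time and an app/window identifier with descriptors of
    the visual elements recognized in the captured content."""

    def __init__(self):
        self.entries = []

    def add(self, app_id, descriptors, timestamp=None):
        self.entries.append({
            "time": timestamp if timestamp is not None else time.time(),
            "app": app_id,
            "descriptors": [d.lower() for d in descriptors],
        })

    def query(self, text):
        """Naive natural-language retrieval: rank entries by how many
        query words appear in their element descriptors."""
        words = set(text.lower().split())
        scored = [(len(words & set(" ".join(e["descriptors"]).split())), e)
                  for e in self.entries]
        best = max(scored, key=lambda s: s[0])
        return best[1] if best[0] > 0 else None

store = LocalContextStore()
store.add("browser", ["Search box", "Submit button"])
store.add("editor", ["Save button", "File menu"])
hit = store.query("the save button in my editor")
print(hit["app"])  # editor
```

A real implementation would then replay the recalled state by synthesizing user-interface events targeted at the matched element, which this sketch omits.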