Summary of: US2025259727A1
Disclosed is a meal detection and meal size estimation machine learning technology. In some embodiments, the techniques entail applying to a trained multioutput neural network model a set of input features, the set of input features representing glucoregulatory management data, insulin on board, and time of day, the trained multioutput neural network model representing multiple fully connected layers and an output layer formed from first and second branches, the first branch providing a meal detection output and the second branch providing a carbohydrate estimation output; receiving from the meal detection output a meal detection indication; and receiving from the carbohydrate estimation output a meal size estimation.
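A minimal sketch of the two-branch architecture this abstract describes: shared fully connected layers feeding a meal-detection head and a carbohydrate-estimation head. The layer sizes, the eight input features, and the random (untrained) weights are assumptions for illustration, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b, act=None):
    """One fully connected layer with an optional activation."""
    z = x @ w + b
    if act == "relu":
        return np.maximum(z, 0.0)
    if act == "sigmoid":
        return 1.0 / (1.0 + np.exp(-z))
    return z

# Input features: e.g. glucoregulatory data, insulin on board, time of day
# (8 features is an assumption).
x = rng.normal(size=(1, 8))

# Shared fully connected trunk.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
h = dense(dense(x, w1, b1, "relu"), w2, b2, "relu")

# Branch 1: meal detection (probability via sigmoid).
wd, bd = rng.normal(size=(16, 1)), np.zeros(1)
meal_detected = dense(h, wd, bd, "sigmoid")

# Branch 2: carbohydrate (meal size) estimation (linear head).
wc, bc = rng.normal(size=(16, 1)), np.zeros(1)
carb_estimate = dense(h, wc, bc)

print(meal_detected.shape, carb_estimate.shape)
```

In a trained version of this sketch, the two heads would share the trunk's representation while being optimized against separate detection and regression losses.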
Summary of: US2025259735A1
Systems and methods for preprocessing input images in accordance with embodiments of the invention are disclosed. One embodiment includes a method for performing inference based on input data, the method includes receiving a set of real-valued input images and preprocessing the set of real-valued input images by applying a virtual optical dispersion to the set of real-valued input images to produce a set of real-valued output images. The method further includes predicting, using a machine learning model, an output based on the set of real-valued output images, computing a loss based on the predicted output and a true output, and updating the machine learning model based on the loss.
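A toy sketch of the preprocessing step this abstract describes: a "virtual optical dispersion" applied to real-valued input images before inference. Here dispersion is modeled as a wavelength-dependent lateral shift per color channel (a crude stand-in for chromatic dispersion); the actual transform and its parameters are assumptions.

```python
import numpy as np

def virtual_dispersion(images, shifts=(-1, 0, 1)):
    """Shift each channel horizontally by a channel-dependent offset.

    images: real-valued array of shape (N, H, W, 3).
    shifts: per-channel pixel shifts standing in for dispersion strength.
    """
    out = np.empty_like(images)
    for c, s in enumerate(shifts):
        out[..., c] = np.roll(images[..., c], s, axis=-1)
    return out

rng = np.random.default_rng(0)
batch = rng.random((4, 8, 8, 3))        # set of real-valued input images
dispersed = virtual_dispersion(batch)   # set of real-valued output images

# Per the abstract, a model would then predict on `dispersed`, a loss would
# be computed against the true output, and the model updated from that loss.
print(dispersed.shape)
```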
Summary of: US2025258917A1
Apparatuses, systems, and techniques for classifying a candidate uniform resource locator (URL) as a malicious URL using a machine learning (ML) detection system. An integrated circuit is coupled to physical memory of a host device via a host interface. The integrated circuit hosts a hardware-accelerated security service that obtains a snapshot of data stored in the physical memory and extracts a set of features from the snapshot. The security service classifies the candidate URL as a malicious URL using the set of features and outputs an indication of the malicious URL.
Summary of: US2025258990A1
A method includes: training a machine learning model with a plurality of electronic circuit placement layouts; predicting, by the machine learning model, fix rates of design rule check (DRC) violations of a new electronic circuit placement layout; identifying hard-to-fix (HTF) DRC violations among the DRC violations based on the fix rates of the DRC violations of the new electronic circuit placement layout; and fixing, by an engineering change order (ECO) tool, the DRC violations.
Summary of: WO2025171236A1
Certain aspects of the disclosure provide systems and methods for diagnosis and treatment of suicidal thought and behavior (STB) through reward-aversion judgment and contextual variables. Methods include generating a set of STB parameters associated with a subject, the set of STB parameters based on reward-aversion judgment variables and contextual variables and processing the set of STB parameters with a machine learning model to generate an STB prediction. The subject may then be treated based on the STB prediction.
Summary of: WO2025171357A1
A distributed generative artificial intelligence (AI) reasoning and action platform that utilizes a cloud-based computing architecture for neuro-symbolic reasoning. The platform comprises systems for distributed computation, curation, marketplace integration, and context management. A distributed computational graph (DCG) orchestrates complex workflows for building and deploying generative AI models, incorporating expert judgment and external data sources. A context computing system aggregates contextual data, while a curation system provides curated responses from trained models. Marketplaces offer data, algorithms, and expert judgment for purchase or integration. The platform enables enterprises to construct user-defined workflows and incorporate trained models into their business processes, leveraging enterprise-specific knowledge. It facilitates flexible and scalable integration of machine learning models into software applications, supported by a dynamic and adaptive DCG architecture.
Summary of: WO2025170921A1
Disclosed are systems and methods for identifying a target molecule by receiving a molecule dataset from a library of molecules comprising chemical-structural representations of candidate molecules for a material application; determining one or more target molecular property values derived from a set of predictive molecular descriptor values obtained by applying a candidate molecule of the molecule dataset to a trained machine learning model, wherein the trained ML model is configured to output the set of predictive molecular descriptor values for a given molecule data input, and wherein the trained ML model was trained on a set of training data and the set of molecular descriptors generated from a molecular descriptor modeling application; and outputting, via a report or to a data store, the one or more target molecular property values for each candidate molecule of the molecule dataset.
Summary of: WO2025170089A1
According to various embodiments of the present disclosure, an operation method for a first node in a wireless communication system is provided, the method comprising the steps of: receiving at least one synchronization signal from a second node; receiving control information from the second node; transmitting first communication environment data to the second node; receiving, from the second node, model information related to a first secondary artificial intelligence/machine learning (AI/ML) model based on a first sub-feature set related to the first communication environment data; transmitting, to the second node, second communication environment data changed from the first communication environment data; and receiving, from the second node, model update information for a second secondary AI/ML model, which is based on a second sub-feature set related to the second communication environment data and is changed from the first secondary AI/ML model.
Summary of: WO2025166404A1
This disclosure relates generally to detecting artificial intelligence (AI) implementation in a software application comprising one or more application packages (APs). One or more processors extract one or more AP strings from the software application, which each represent an AP; and create a prompt for a machine learning model, trained to generate output text, comprising the one or more AP strings, the prompt representing instructions to provide a classification and provide functionality information of each of the one or more APs, the classification being AI relevant or non-AI relevant and the functionality information describing a functionality of the respective AP. The one or more processors then evaluate the machine learning model on the prompt to generate output text corresponding to the classification and the functionality information of each of the one or more APs; and generate a report of the AI implementation based on the output text.
Summary of: US2025259080A1
Automated computer systems and methods to determine a sentiment of information in digital information or content are disclosed. One aspect includes deriving, by a processor, the digital information from a source; generating, by the processor, a domain-specific machine learning sentiment score, based on the digital information, by one model of at least two machine learning models; autonomously mapping, by the processor, a non-domain specific knowledge graph of associations between elements in a set of digital contextual information; receiving, by the processor, sentiment graphs, each sentiment graph defining a sentiment; generating, by the processor, a graph sentiment score based on the non-domain specific knowledge graph and the sentiment graphs; generating, by the processor, a final sentiment score based on the graph sentiment score and the domain-specific machine learning sentiment score; and determining the sentiment of the information in the digital information or content via the final sentiment score.
Summary of: US2025259083A1
Systems and techniques that facilitate data diversity visualization and/or quantification for machine learning models are provided. In various embodiments, a processor can access a first dataset and a second dataset, where a machine learning (ML) model is trained on the first dataset. In various instances, the processor can obtain a first set of latent activations generated by the ML model based on the first dataset, and a second set of latent activations generated by the ML model based on the second dataset. In various aspects, the processor can generate a first set of compressed data points based on the first set of latent activations, and a second set of compressed data points based on the second set of latent activations, via dimensionality reduction. In various instances, a diversity component can compute a diversity score based on the first set of compressed data points and second set of compressed data points.
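A minimal sketch of the diversity workflow this abstract describes: compress latent activations from two datasets through a shared dimensionality reduction, then score diversity on the compressed points. PCA via SVD and a centroid-distance score are assumptions; the patent does not fix either choice.

```python
import numpy as np

rng = np.random.default_rng(1)
acts_train = rng.normal(0.0, 1.0, size=(200, 32))  # latent activations, training dataset
acts_new = rng.normal(1.5, 1.0, size=(200, 32))    # latent activations, second dataset

def pca_fit_transform(reference, *sets, k=2):
    """Fit PCA on `reference` and project every activation set to k dims."""
    mean = reference.mean(axis=0)
    _, _, vt = np.linalg.svd(reference - mean, full_matrices=False)
    components = vt[:k]
    return [(s - mean) @ components.T for s in sets]

comp_train, comp_new = pca_fit_transform(acts_train, acts_train, acts_new)

# Diversity score: distance between the compressed centroids (one simple choice).
diversity = float(np.linalg.norm(comp_train.mean(axis=0) - comp_new.mean(axis=0)))
print(round(diversity, 3))
```

Fitting the projection on one dataset and reusing it for both keeps the two compressed point sets directly comparable.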
Summary of: US2025259103A1
The present disclosure describes a patent management system and method for remediating insufficiency of input data for a machine learning system. A prediction to be performed is received from a user input. Relevant input data to perform the prediction is determined by applying filters based on the prediction to be performed. The prediction is performed by generating a plurality of predicted vectors, and a confidence score for the generated plurality of predicted vectors is determined. If the confidence score is less than a predetermined threshold, the prediction is deemed unreliable and the input data is expanded by gathering additional input data until the confidence score exceeds the predetermined threshold. A predicted output is then generated with the expanded input data, and the predicted output and the confidence score are provided for rendering.
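The expand-until-confident loop this abstract describes can be sketched as follows. The `predict` stand-in, the data source, and the threshold are hypothetical; the patent's filters and predicted vectors are abstracted into a confidence that grows with the amount of input data.

```python
def predict(data):
    """Toy model: confidence grows with the amount of relevant input data."""
    confidence = min(1.0, len(data) / 10.0)
    output = sum(data) / len(data) if data else None
    return output, confidence

def predict_with_expansion(initial_data, fetch_more, threshold=0.8):
    """Expand the input data until the prediction's confidence clears the threshold."""
    data = list(initial_data)
    output, confidence = predict(data)
    while confidence < threshold:      # prediction deemed unreliable
        data.extend(fetch_more())      # gather additional input data
        output, confidence = predict(data)
    return output, confidence

extra_batches = iter([[4, 5], [6, 7], [8, 9], [1, 2]])
output, confidence = predict_with_expansion([1, 2, 3], lambda: next(extra_batches))
print(output, confidence)  # → 5.0 0.9
```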
Summary of: US2025259114A1
In general, in one aspect, embodiments relate to a method of producing a sustainable pipeline of pozzolanic materials that includes gathering unstructured and/or structured data publicly available on a network, identifying analytical data of a pozzolanic material using one or more machine learning models, where the analytical data is present within at least the structured data, extracting the analytical data from the structured data, predicting, using one or more predictive models, one or more performance characteristics of the pozzolanic material based at least in part on the analytical data, to form one or more predicted performance characteristics, comparing the one or more predicted performance characteristics to one or more minimum acceptable performance characteristics, storing the extracted analytical data and the one or more predicted performance characteristics in a database if the one or more predicted performance characteristics meet or exceed the one or more minimum acceptable performance characteristics, and preparing a cement composition that includes the pozzolanic material if the one or more predicted performance characteristics meet or exceed the one or more minimum acceptable performance characteristics.
Summary of: US2025259078A1
Disclosed embodiments may provide techniques for extracting hypothetical statements from unstructured data. A computer-implemented method can include accessing input data that includes unstructured data. The computer-implemented method can also include processing the input data using a statement-extraction machine-learning model to generate a plurality of candidate hypothetical statements and summary data associated with the input data. The computer-implemented method can also include constructing one or more filtering prompts for filtering the plurality of candidate hypothetical statements. The computer-implemented method can also include processing the one or more filtering prompts and the plurality of candidate hypothetical statements using the statement-extraction machine-learning model to identify one or more hypothetical statements. In some instances, the one or more hypothetical statements correspond to one or more non-factual assertions associated with the unstructured data. The computer-implemented method can also include transmitting the summary data of the input data and the one or more hypothetical statements.
Summary of: US2025258969A1
The present disclosure relates to systems and methods for manufacturing a battery electrode plate. The system comprises a computing device configured to receive, from the client device, a target process factor among a plurality of process factors associated with manufacturing a battery electrode plate, predict, via a machine-learning model, a change in a characteristic of the battery electrode plate based on a change in a design value of the target process factor, generate information for selecting the target process factor based on predicting the change of the characteristic of the battery electrode plate, and transmit the information to the client device for manufacturing the battery electrode plate.
Summary of: US2025259077A1
Methods and systems are provided herein for generating optimized, hybrid machine learning models capable of performing tasks such as classification and inference in IoT environments. The models may be deployed as optimized, task-specific (and/or environment-specific) hardware components (e.g., custom chips to perform the machine learning tasks) or lightweight applications that can operate on resource constrained devices. The hybrid models may comprise hybridization modules that integrate output of one or more machine learning models, according to sets of hyperparameters that are refined according to the task and/or environment/sensor data that will be used by the IoT device.
Summary of: US2025252338A1
Certain aspects of the present disclosure provide techniques and apparatus for improved machine learning. In an example method, a current program state comprising a set of program instructions is accessed. A next program instruction is generated using a search operation, comprising generating a probability of the next program instruction based on processing the current program state and the next program instruction using a machine learning model, and generating a value of the next program instruction based on processing the current program state, the next program instruction, and a set of alternative outcomes using the machine learning model. An updated program state is generated based on adding the next program instruction to the set of program instructions.
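A toy illustration of the search step this abstract describes: each candidate next instruction is scored with a "policy" probability and a "value" estimate from a stand-in model, and the chosen instruction is appended to the program state. The instruction set and both scoring functions are assumptions, not the patent's model.

```python
import math

CANDIDATES = ["LOAD", "ADD", "MUL", "STORE"]

def policy(state, instr):
    """Stand-in model: probability of `instr` given the current program state."""
    score = len(state) + CANDIDATES.index(instr)   # deterministic toy score
    total = sum(math.exp(len(state) + i) for i in range(len(CANDIDATES)))
    return math.exp(score) / total

def value(state, instr, alternatives):
    """Stand-in value: margin of `instr` over the best alternative outcome."""
    return policy(state, instr) - max(policy(state, a) for a in alternatives if a != instr)

def search_step(state):
    """One search operation: pick the next instruction by policy plus value."""
    best = max(CANDIDATES, key=lambda i: policy(state, i) + value(state, i, CANDIDATES))
    return state + [best]                          # updated program state

program = search_step(["LOAD"])
print(program)  # → ['LOAD', 'STORE']
```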
Summary of: US2025252341A1
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
Summary of: US2025252412A1
A method may include determining a combination of values of attributes represented by reference data associated with payment transaction by training a machine learning model based on an association between (i) respective values of the attributes and (ii) the payment transactions having a given result. The combination may be correlated with having the given result. The method may also include selecting a subset of the payment transactions that is associated with the combination of values. The method may additionally include determining a first rate at which payment transactions of the subset have the given result during a first time period and a second rate at which one or more payment transactions associated with the combination have the given result during a second time period, and generating an indication that the two rates differ.
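The rate-comparison step above can be sketched directly: given transactions matching the learned attribute combination, compute the rate of the given result in each time period and flag a difference. The field names and the difference tolerance are assumptions.

```python
def result_rate(transactions):
    """Fraction of transactions having the given result."""
    flagged = sum(1 for t in transactions if t["result"])
    return flagged / len(transactions)

# Transactions associated with the learned combination of attribute values.
period_1 = [{"result": True}] * 2 + [{"result": False}] * 8   # first time period
period_2 = [{"result": True}] * 5 + [{"result": False}] * 5   # second time period

rate_1, rate_2 = result_rate(period_1), result_rate(period_2)
rates_differ = abs(rate_1 - rate_2) > 0.05   # tolerance is an assumption
print(rate_1, rate_2, rates_differ)  # → 0.2 0.5 True
```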
Summary of: US2025252112A1
A method and system for training a machine-learning algorithm (MLA) to rank digital documents at an online search platform. The method comprises training the MLA in a first phase for determining past user interactions of a given user with past digital documents based on a first set of training objects including the past digital documents generated by the online search platform in response to the given user having submitted thereto respective past queries. The method further comprises training the MLA in a second phase to determine respective likelihood values of the given user interacting with in-use digital documents based on a second set of training objects including only those past digital documents with which the given user has interacted and respective past queries associated therewith. The MLA may include a Transformer-based learning model, such as a BERT model.
Summary of: US2025254189A1
Identifying Internet of Things (IoT) devices by their packet flow behavior, including by using machine learning models, is disclosed. A set of training data associated with a plurality of IoT devices is received. The set of training data includes, for at least some of the IoT devices, a set of time series features for applications used by the IoT devices. A model is generated using at least a portion of the received training data. The model is usable to classify a given device.
Summary of: WO2025163013A1
A computer-implemented method is provided for training a machine learning model to identify one or more network events associated with a network and representing a network security threat. The method comprises: a) obtaining a first dataset comprising data representative of a plurality of network events in a first network; b) obtaining a second dataset comprising data representative of a plurality of network events in a second network; c) performing covariate shift analysis on the first dataset and the second dataset to identify and classify a plurality of differences between the first dataset and the second dataset; d) performing domain adaptation on the first dataset, based on a classified difference, to generate a training dataset; e) training a machine learning model using the training dataset to produce a trained threat detection model.
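A simplified sketch of steps (c) and (d) above: detect covariate shift between two network-event datasets by comparing per-feature statistics, then adapt the first dataset toward the second on the shifted dimensions to build the training dataset. The mean-gap test, the threshold, and the mean-shift adaptation are assumptions; the patented method would classify and handle differences in richer ways.

```python
import numpy as np

rng = np.random.default_rng(2)
events_net1 = rng.normal(0.0, 1.0, size=(500, 3))  # network-event features, network 1
events_net2 = rng.normal(0.5, 1.0, size=(500, 3))  # same features, network 2

# (c) Covariate shift analysis: flag features whose means differ notably.
mean_gap = np.abs(events_net1.mean(axis=0) - events_net2.mean(axis=0))
shifted = mean_gap > 0.25                           # threshold is an assumption

# (d) Domain adaptation: shift network-1 features toward network-2 statistics
# on the flagged dimensions to generate the training dataset.
training = events_net1.copy()
training[:, shifted] += (events_net2.mean(axis=0) - events_net1.mean(axis=0))[shifted]

print(shifted.tolist())
```

Step (e) would then train the threat detection model on `training` rather than on the raw network-1 events.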
Summary of: WO2025160650A1
Described are various embodiments of a system and method for monitoring and optimizing player engagement. In some embodiments, the computer-implemented method comprises generating, on a server, a storage layer in the form of a graph drawn according to a schema description of objects and relationships in a virtual game environment. The server produces, from the received schema and learning system objectives, one or more instructions. The instructions are transmitted to and applied by a gaming device configured to execute a designated interactive software program, to produce one or more embeddings from the raw data generated. The embeddings are stored in the graph and retrieved to perform one or more data analysis tasks on the designated embeddings by one or more machine learning algorithms. The embeddings can be augmented or optimized into contextualized preference embeddings or contextualized timeline embeddings, to allow better contextual learning and predictive outputs.
Summary of: WO2025163523A1
Method and servers for determining optimized device parameters of an electronic circuit. The method includes accessing design information of the electronic circuit, determining a set of device parameters of the electronic circuit based on the design information, receiving information indicative of a set of performances-of-interest to be optimized and a corresponding set of target values, each target value being associated with a corresponding performance-of-interest, defining a multi-objective reward function based on the set of performances-of-interest to be optimized, and outputting, using a pre-built Machine Learning (ML) algorithm interacting with an electronic design automation (EDA) environment, an optimized device parameter value for each of the device parameters based on the multi-objective reward function.
Publication No.: WO2025164720A1 07/08/2025
Applicant:
NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECH [JP]
国立研究開発法人情報通信研究機構
Summary of: WO2025164720A1
A model generation device according to one aspect of the present disclosure acquires eyeball-related data measured from a subject who is viewing content, uses the acquired eyeball-related data to perform machine learning of an inference model, and outputs the results of the machine learning. The machine learning includes training the inference model to acquire, from the eyeball-related data, the ability to infer a semantic representation in an information space corresponding to content included in the viewed content. Thus, the present disclosure provides a technique for easily inferring, at low cost, content perceived by an individual while viewing content.