Abstract of: US2025252412A1
A method may include determining a combination of values of attributes represented by reference data associated with payment transactions by training a machine learning model based on an association between (i) respective values of the attributes and (ii) the payment transactions having a given result. The combination may be correlated with having the given result. The method may also include selecting a subset of the payment transactions that is associated with the combination of values. The method may additionally include determining a first rate at which payment transactions of the subset have the given result during a first time period and a second rate at which one or more payment transactions associated with the combination have the given result during a second time period, and generating an indication that the two rates differ.
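The rate-comparison step can be illustrated with a minimal sketch. Everything here is invented for illustration (the transaction fields, the learned attribute combination, and the difference threshold); the abstract does not specify these details.

```python
# Hypothetical sketch: select the subset of transactions matching a learned
# attribute combination, compute the rate of the given result in each of two
# time periods, and flag when the rates differ by more than a threshold.

def result_rate(transactions, combination, period):
    """Fraction of transactions matching `combination` in `period`
    that have the given result (e.g. a chargeback)."""
    matching = [t for t in transactions
                if t["period"] == period
                and all(t["attrs"].get(k) == v for k, v in combination.items())]
    if not matching:
        return 0.0
    return sum(t["result"] for t in matching) / len(matching)

def rates_differ(transactions, combination, threshold=0.1):
    r1 = result_rate(transactions, combination, period=1)
    r2 = result_rate(transactions, combination, period=2)
    return (abs(r1 - r2) > threshold, r1, r2)

txns = [
    {"period": 1, "attrs": {"country": "US", "channel": "web"}, "result": 1},
    {"period": 1, "attrs": {"country": "US", "channel": "web"}, "result": 0},
    {"period": 2, "attrs": {"country": "US", "channel": "web"}, "result": 1},
    {"period": 2, "attrs": {"country": "US", "channel": "web"}, "result": 1},
]
differ, r1, r2 = rates_differ(txns, {"country": "US", "channel": "web"})
```

In the claimed method the combination itself would come from the trained model; here it is supplied by hand.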
Abstract of: US2025252341A1
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
Abstract of: US2025254189A1
Identifying Internet of Things (IoT) devices by their packet flow behavior, including by using machine learning models, is disclosed. A set of training data associated with a plurality of IoT devices is received. The set of training data includes, for at least some of the exemplary IoT devices, a set of time series features for applications used by the IoT devices. A model is generated using at least a portion of the received training data. The model is usable to classify a given device.
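As a toy illustration of classifying a device from time-series features, the sketch below uses a nearest-centroid model. The feature set and device labels are invented; the abstract does not name the model type or features.

```python
# Hypothetical sketch: classify an IoT device from time-series features of
# the applications it uses, via a nearest-centroid model built from labeled
# training devices.

from math import dist

def build_model(training):
    """training: list of (label, feature_vector) pairs -> label centroids."""
    sums, counts = {}, {}
    for label, vec in training:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, vec):
    """Assign the label whose centroid is nearest to `vec`."""
    return min(model, key=lambda label: dist(model[label], vec))

# invented features: [mean packets/min, mean flow duration, distinct ports]
train = [
    ("camera",     [120.0, 30.0, 2.0]),
    ("camera",     [110.0, 28.0, 2.0]),
    ("thermostat", [5.0,    2.0, 1.0]),
    ("thermostat", [6.0,    3.0, 1.0]),
]
model = build_model(train)
label = classify(model, [115.0, 29.0, 2.0])
```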
Abstract of: US2025252338A1
Certain aspects of the present disclosure provide techniques and apparatus for improved machine learning. In an example method, a current program state comprising a set of program instructions is accessed. A next program instruction is generated using a search operation, comprising generating a probability of the next program instruction based on processing the current program state and the next program instruction using a machine learning model, and generating a value of the next program instruction based on processing the current program state, the next program instruction, and a set of alternative outcomes using the machine learning model. An updated program state is generated based on adding the next program instruction to the set of program instructions.
Abstract of: US2025252112A1
A method and system for training a machine-learning algorithm (MLA) to rank digital documents at an online search platform. The method comprises training the MLA in a first phase for determining past user interactions of a given user with past digital documents based on a first set of training objects including the past digital documents generated by the online search platform in response to the given user having submitted thereto respective past queries. The method further comprises training the MLA in a second phase to determine respective likelihood values of the given user interacting with in-use digital documents based on a second set of training objects including only those past digital documents with which the given user has interacted and respective past queries associated therewith. The MLA may include a Transformer-based learning model, such as a BERT model.
Abstract of: GB2637695A
A combined hyperparameter and proxy model tuning method is described. The method involves iterations for hyperparameter search 102. In each search iteration, candidate hyperparameters are considered. An initial ('seed') hyperparameter is determined by initialization function 110, and used to train (104) one or more first proxy models on a target dataset 101. From the first proxy model(s), one or more first synthetic datasets are sampled using sampling function 108. A first evaluation model is fitted to each first synthetic dataset, for each candidate hyperparameter, by applying fit function 106, enabling each candidate hyperparameter from hyperparameter generator 112 to be scored. Based on the respective scores assigned to the candidate hyperparameters, a candidate hyperparameter is selected and used (103) to train one or more second proxy models on the target dataset. The hyperparameter search may be random, grid-based, or Bayesian. The scores produced by scoring function 114 can be F1 scores. Uses include a generative causal model with neural network architectures.
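One search iteration of this scheme can be sketched under strong simplifying assumptions: here the "proxy model" is just a Gaussian fitted to the target data, synthetic datasets are samples from it, and each candidate hyperparameter is a made-up regularisation strength scored by fit error (not an F1 score) on the synthetic data.

```python
# Very simplified sketch of one search iteration: train proxy on target data,
# sample a synthetic dataset from the proxy, then score each candidate
# hyperparameter by fitting a toy evaluation model to the synthetic data.

import random
import statistics

def train_proxy(target):
    # proxy model = fitted Gaussian (mean, stdev)
    return statistics.mean(target), statistics.stdev(target)

def sample_synthetic(proxy, n, rng):
    mu, sigma = proxy
    return [rng.gauss(mu, sigma) for _ in range(n)]

def fit_and_score(candidate, synthetic):
    # toy evaluation model: shrink the sample mean toward 0 by `candidate`,
    # scored by negative squared error against the raw sample mean
    m = statistics.mean(synthetic)
    est = m / (1.0 + candidate)
    return -(est - m) ** 2

rng = random.Random(0)
target = [rng.gauss(10.0, 2.0) for _ in range(200)]
proxy = train_proxy(target)
synthetic = sample_synthetic(proxy, 100, rng)
candidates = [0.0, 0.1, 1.0]
best = max(candidates, key=lambda c: fit_and_score(c, synthetic))
```

In the described method the selected hyperparameter would then be used to train the second proxy model(s) on the target dataset; that step is omitted here.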
Abstract of: CN119949012A
An apparatus for wireless communication by a first wireless local area network (WLAN) device has a memory and one or more processors coupled to the memory. The processor is configured to transmit a first message indicating support of the first WLAN device for machine learning. The processor is also configured to receive a second message from a second WLAN device. The second message indicates support of the second WLAN device for one or more machine learning model types. The processor is configured to activate a machine learning session with the second WLAN device based at least in part on the second message. The processor is further configured to receive machine learning model structure information and machine learning model parameters from the second WLAN device during the machine learning session.
Abstract of: WO2024073382A1
Dynamic timers are determined using machine learning. The timers are used to control the amount of time that new data transaction requests wait before being processed by a data transaction processing system. The timers are adjusted based on changing conditions within the data transaction processing system. The dynamic timers may be determined using machine learning inference based on feature values calculated as a result of the changing conditions.
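A minimal sketch of the dynamic-timer idea follows, with a small linear model standing in for the machine learning inference step. The feature names, coefficients, and clamping bounds are all invented for illustration.

```python
# Hypothetical sketch: derive a per-request wait timer from feature values
# that reflect changing conditions in the data transaction processing system.

def timer_ms(features, weights, base_ms=5.0, lo=1.0, hi=50.0):
    """Predict how long a new request should wait, clamped to [lo, hi] ms."""
    t = base_ms + sum(weights[k] * v for k, v in features.items())
    return max(lo, min(hi, t))

# invented coefficients standing in for a trained model
weights = {"queue_depth": 0.5, "volatility": 20.0, "recent_cancels": 1.0}

calm = timer_ms({"queue_depth": 2,  "volatility": 0.1, "recent_cancels": 0},  weights)
busy = timer_ms({"queue_depth": 40, "volatility": 0.9, "recent_cancels": 10}, weights)
```

Under these assumed weights, calmer conditions yield a short wait and busy, volatile conditions are clamped at the upper bound.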
Abstract of: EP4597360A1
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
Abstract of: GB2637669A
Performing predictive inferences on a first natural language document having first sentences 611 and a second natural language document having second sentences 612. For each sentence from the first and second sentences, a sentence embedding is generated using a sentence embedding machine learning model 601; the embedding model is generated by updating parameters of an initial, preferably pretrained, embedding model based on a similarity determination model error measure. For each sentence pair comprising a first sentence and a second sentence, an inferred similarity measure 631 is determined using the similarity determination machine learning model 602 and the sentence embeddings 621, 622. For each similarity measure a predictive output is generated, and prediction-based actions are performed based on the output. The predictive output may comprise generating a cross-document relationship graph with nodes and edges representing relationships between sentences. The similarity determination model error measure can be based on a deviation measure and a ground-truth similarity measure for a training sentence pair. The first document data object is preferably a user-provided query and the predictive output a search result.
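The cross-document graph construction can be sketched as follows. This is illustrative only: crude bag-of-words "embeddings" and cosine similarity stand in for the trained embedding and similarity determination models, and the similarity threshold is invented.

```python
# Illustrative sketch: embed each sentence, score each cross-document
# sentence pair, and add edges above a threshold to a relationship graph.

from math import sqrt

def embed(sentence):
    # stand-in for the sentence embedding model: term counts
    vec = {}
    for w in sentence.lower().split():
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    # stand-in for the similarity determination model
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cross_document_graph(doc1, doc2, threshold=0.5):
    """Edges (i, j, similarity) between sentences of the two documents."""
    edges = []
    for i, s1 in enumerate(doc1):
        for j, s2 in enumerate(doc2):
            sim = cosine(embed(s1), embed(s2))
            if sim >= threshold:
                edges.append((i, j, round(sim, 2)))
    return edges

doc1 = ["the model ranks documents", "training uses past queries"]
doc2 = ["the model ranks web documents", "weather is unrelated"]
edges = cross_document_graph(doc1, doc2)
```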
Abstract of: EP4597929A1
A computer-implemented method is provided for training a machine learning model to identify one or more network events associated with a network and representing a network security threat. The method comprises: a) obtaining a first dataset comprising data representative of a plurality of network events in a first network; b) obtaining a second dataset comprising data representative of a plurality of network events in a second network; c) performing covariate shift analysis on the first dataset and the second dataset to identify and classify a plurality of differences between the first dataset and the second dataset; d) performing domain adaptation on the first dataset, based on a classified difference, to generate a training dataset; e) training a machine learning model using the training dataset to produce a trained threat detection model.
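Step (c) can be illustrated with a deliberately crude covariate-shift check: compare per-feature means of the two network-event datasets and classify features whose means differ by more than a tolerance as shifted. The feature names, tolerance, and mean-difference test are assumptions, not the patent's method.

```python
# Hedged sketch of covariate shift analysis between two event datasets,
# each a list of {feature: value} dicts with the same feature keys.

def covariate_shift(ds1, ds2, tol=0.5):
    """Return {feature: mean_shift} for features whose means differ > tol."""
    shifted = {}
    for feat in ds1[0]:
        m1 = sum(e[feat] for e in ds1) / len(ds1)
        m2 = sum(e[feat] for e in ds2) / len(ds2)
        if abs(m1 - m2) > tol:
            shifted[feat] = m2 - m1
    return shifted

net1 = [{"bytes": 1.0, "conns": 3.0}, {"bytes": 2.0, "conns": 3.0}]
net2 = [{"bytes": 5.0, "conns": 3.2}, {"bytes": 6.0, "conns": 2.8}]
diffs = covariate_shift(net1, net2)
```

The returned shifts would then drive the domain adaptation of step (d), e.g. reweighting or transforming the shifted features before training.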
Abstract of: MX2025004899A
Disclosed are systems and methods for rapidly generating general reaction conditions using a closed-loop workflow leveraging matrix down-selection, machine learning, and robotic experimentation. In certain aspects, provided is a method, comprising: selecting a reaction pair comprising a first molecule and a second molecule; wherein the first molecule is selected from a first matrix and the second molecule is selected from a second matrix; selecting one or more reaction conditions for the reaction pair, the selection based on historic use of the one or more reaction conditions and a structural and functional diversity of the selected reaction pair; automatically performing, by a robotic system, an initial round of reactions between the selected reaction pair under the selected one or more reaction conditions.
Abstract of: WO2025159851A1
A method for rules-based modeling may include capturing a plurality of historical transaction data of a client account. The method may further include extracting a plurality of item level features from the plurality of historical transaction data. The method may further include providing the plurality of item level features to a predictive machine-learning model. The predictive machine-learning model may be trained to identify patterns within the plurality of item level features and generate a projected balance for the client account based on the identified patterns. The method may further include transmitting the projected balance to a user interface.
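The feature-extraction and projection pipeline might look like the sketch below, where a simple flow-repetition rule stands in for the trained predictive model. The transaction fields, the per-category feature set, and the projection rule are all invented.

```python
# Hedged sketch: extract item-level features from historical transactions,
# then project the account balance forward from them.

def item_level_features(transactions):
    """Aggregate historical transactions into per-category net flows."""
    feats = {}
    for t in transactions:
        feats[t["category"]] = feats.get(t["category"], 0.0) + t["amount"]
    return feats

def projected_balance(current_balance, transactions, months):
    """Project the balance one month ahead, assuming the average monthly
    net flow observed over `months` of history repeats."""
    monthly_net = sum(item_level_features(transactions).values()) / months
    return current_balance + monthly_net

history = [
    {"category": "salary", "amount": 3000.0},
    {"category": "rent",   "amount": -1200.0},
    {"category": "salary", "amount": 3000.0},
    {"category": "rent",   "amount": -1200.0},
]
balance = projected_balance(500.0, history, months=2)
```

In the claimed method, the projection would come from a model trained to find patterns in the item-level features rather than this fixed rule.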
Abstract of: WO2025159880A1
A method, a system, and a computer program product for generation of document rules. A structural arrangement of one or more portions of each electronic document in a plurality of electronic documents is determined using one or more machine learning models. One or more parameters associated with each electronic document in the plurality of electronic documents are identified. One or more document generation rules are generated based on one or more parameters and the structural arrangement of one or more portions. One or more document generation rules are generated for each type of electronic document in the plurality of electronic documents. One or more document generation rules are stored in a storage location.
Abstract of: WO2025157774A1
Computer-implemented methods of providing a clinical predictor tool are described, comprising: obtaining training data comprising, for each of a plurality of patients, values for a plurality of clinical variables comprising a variable indicative of a diagnosis or prognosis and one or more further clinical variables; and training a clinical predictor model to predict the variable indicative of a diagnosis or prognosis using said training data, wherein obtaining the training data comprises obtaining synthetic clinical data comprising values for a plurality of clinical variables for one or more patients by obtaining a directed acyclic graph (DAG) with edges corresponding to conditional dependence relationships inferred from real clinical data comprising values for the plurality of clinical variables for a plurality of patients, and obtaining values for each node of the DAG using a machine learning model and a multivariate conditional probability table. Computer-implemented methods of obtaining synthetic clinical data are also described.
Abstract of: WO2025159758A1
Certain aspects of the disclosure provide a method for carbon capture, utilization, and storage (CCUS) process simulation. The method generally includes processing, with a first sub-model of a machine learning (ML) model, input features to generate a first thermodynamic process parameter for a first thermodynamic component, wherein the input features comprise one or more thermodynamic properties; processing, with a second sub-model of the ML model, the input features to generate a second thermodynamic process parameter predicted for the first thermodynamic component; selecting, by the ML model, a final thermodynamic process parameter for the first thermodynamic component as: the first thermodynamic process parameter when the first thermodynamic process parameter is greater than a first threshold; or the second thermodynamic process parameter when the first thermodynamic process parameter is less than the first threshold; and providing as a first output from the ML model the selected final thermodynamic process parameter.
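The threshold-based selection between the two sub-models reduces to simple control flow. In the sketch below, the two sub-models are stub functions and the feature names and threshold are invented; only the selection logic mirrors the abstract.

```python
# Minimal sketch of the selection step: keep the first sub-model's predicted
# thermodynamic process parameter only when it exceeds a threshold,
# otherwise fall back to the second sub-model's prediction.

def sub_model_1(features):
    return 2.0 * features["pressure"]                     # placeholder

def sub_model_2(features):
    return features["pressure"] + features["temperature"]  # placeholder

def select_parameter(features, threshold=10.0):
    p1 = sub_model_1(features)
    if p1 > threshold:
        return p1
    return sub_model_2(features)

high = select_parameter({"pressure": 6.0, "temperature": 1.0})  # p1 = 12.0
low  = select_parameter({"pressure": 4.0, "temperature": 1.0})  # p1 = 8.0
```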
Abstract of: US2025245050A1
A computer system includes a transceiver that receives over a data communications network different types of input data from multiple source nodes and a processing system that defines for each of multiple data categories, a set of groups of data objects for the data category based on the different types of input data. Predictive machine learning model(s) predict a selection score for each group of data objects in the set of groups of data objects for the data category for a predetermined time period. Control machine learning model(s) determine how many data objects are permitted for each group of data objects based on the selection score. Decision-making machine learning model(s) prioritize the permitted data objects based on one or more predetermined priority criteria. Subsequent activities of the computer system are monitored to calculate performance metrics for each group of data objects and for data objects actually selected during the predetermined time period. Predictive machine learning model(s) and decision-making machine learning model(s) are adjusted based on the performance metrics to improve respective performance(s).
Abstract of: US2025245533A1
The present disclosure provides systems and methods for on-device machine learning. In particular, the present disclosure is directed to an on-device machine learning platform and associated techniques that enable on-device prediction, training, example collection, and/or other machine learning tasks or functionality. The on-device machine learning platform can include a context provider that securely injects context features into collected training examples and/or client-provided input data used to generate predictions/inferences. Thus, the on-device machine learning platform can enable centralized training example collection, model training, and usage of machine-learned models as a service to applications or other clients.
Abstract of: US2025245532A1
In some embodiments, a computer-implemented method for predicting agronomic field property data for one or more agronomic fields using a trained machine learning model is disclosed. The method comprises receiving, at an agricultural intelligence computer system, agronomic training data; training a machine learning model, at the agricultural intelligence computer system, using the agronomic training data; in response to receiving a request from a client computing device for agronomic field property data for one or more agronomic fields, automatically predicting the agronomic field property data for the one or more agronomic fields using the machine learning model configured to predict agronomic field property data; based on the agronomic field property data, automatically generating a first graphical representation; and causing to display the first graphical representation on the client computing device.
Abstract of: US2025245530A1
Certain aspects of the present disclosure provide techniques and apparatus for generating a response to a query input in a generative artificial intelligence model using variable draft length. An example method generally includes determining (e.g., measuring or accessing) one or more operational properties of a device on which inferencing operations using a machine learning model are performed. A first draft set of tokens is generated using the machine learning model. A number of tokens included in the first draft set of tokens is based on the one or more operational properties of the device and a defined scheduling function for the machine learning model. The first draft set of tokens is output for verification.
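The scheduling idea can be sketched with an invented scheduling function: fewer draft tokens when the device is busy or thermally constrained. The operational properties, the function itself, and the placeholder draft model are all assumptions.

```python
# Hypothetical sketch: pick a variable draft length from measured device
# properties, then emit that many draft tokens for later verification.

def draft_length(utilization, thermal_headroom, max_tokens=8):
    """Invented scheduling function: shrink the draft-token budget as
    utilization rises or thermal headroom falls."""
    budget = (1.0 - utilization) * thermal_headroom
    return max(1, min(max_tokens, int(budget * max_tokens)))

def draft_tokens(step, n):
    # stand-in for the draft model: emit `n` placeholder token ids
    return [step * 100 + i for i in range(n)]

n_idle = draft_length(utilization=0.1, thermal_headroom=1.0)
n_busy = draft_length(utilization=0.9, thermal_headroom=0.5)
tokens = draft_tokens(step=1, n=n_busy)
```

An idle, cool device gets a long draft per step; a busy, hot one degrades gracefully to a single draft token.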
Abstract of: US2025245553A1
Aspects of the disclosure are directed to improving load balancing for serving mixture of experts (MoE) machine learning models. Load balancing is improved by providing memory dies increased access to computing dies through a 2.5D configuration and/or an optical configuration. Load balancing is further improved through a synchronization mechanism that determines an optimal split of batches of data across the computing dies based on a received MoE request to process the batches of data. The 2.5D configuration and/or optical configuration, as well as the synchronization mechanism, can improve usage of the computing dies and reduce the number of memory dies required to serve the MoE models, resulting in lower power consumption, lower latencies, and less alignment complexity associated with remotely accessing memory.
Abstract of: WO2025156057A1
Systems and methods for fine-tuning a machine learning model for use in a patent analytics and infringement system are disclosed herein. The method involves providing, in a memory, a litigation dataset comprising a plurality of historical patent litigation records, each patent litigation record comprising at least one patent claim, and a litigation outcome corresponding to each of the at least one patent claim; receiving, at a processor in communication with the memory, a product/service content item associated with a candidate patent litigation record in the plurality of historical patent litigation records; updating, at the processor, the candidate patent litigation record based on the product/service content item; and generating, at the processor, a vector database based on the litigation dataset, the vector database used to fine-tune a machine learning model. Systems and methods for identifying products and services relevant to a patent claim and for ranking results are also described.
Abstract of: WO2025159979A1
A method for establishing and generating contextual links between data from a plurality of data sources is described. The method includes receiving data and decomposing the received data into a decomposed data set; parsing and analyzing the decomposed data set based on a set of attribute analyzers to associate one or more attributes to the decomposed data set; determining an intent of data from the decomposed data set; generating a semantic graph of the decomposed data set based on the intent of data to evaluate data relatability between the decomposed data set; generating atomic knowledge units (AMUs) based on the parsed decomposed data set and the semantic graph; analyzing the AMUs corresponding to the received data by applying trained machine learning models to generate links between the AMUs and processing the generated links by a model ensemble to establish contextual links between data.
Abstract of: US2024104370A1
A method comprising: sampling a first causal graph from a first graph distribution modelling causation between variables in a feature vector, and sampling a second causal graph from a second graph distribution modelling presence of possible confounders, a confounder being an unobserved cause of both of two variables. The method further comprises: identifying a parent variable which is a cause of a selected variable according to the first causal graph, and which together with the selected variable forms a confounded pair having a respective confounder being a cause of both according to the second causal graph. A machine learning model encodes the parent to give a first embedding, and encodes information on the confounded pair to give a second embedding. The embeddings are combined and then decoded to give a reconstructed value. This mechanism may be used in training the model or in treatment effect estimation.
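The encode-combine-decode step can be shown schematically with tiny linear "encoders" and a linear "decoder" in place of the learned networks; all weights and the elementwise-sum combination are illustrative assumptions.

```python
# Schematic sketch: encode the parent variable and the confounded-pair
# information into two embeddings, combine them, and decode to a
# reconstructed value of the selected variable.

def encode_parent(value):
    return [0.5 * value, -0.25 * value]        # first embedding

def encode_confounded_pair(pair_info):
    return [0.1 * pair_info, 0.2 * pair_info]  # second embedding

def combine(e1, e2):
    # elementwise sum as the combination rule (an assumption)
    return [a + b for a, b in zip(e1, e2)]

def decode(embedding):
    return 2.0 * embedding[0] + embedding[1]   # reconstructed value

parent_value, pair_info = 4.0, 1.0
combined = combine(encode_parent(parent_value),
                   encode_confounded_pair(pair_info))
reconstructed = decode(combined)
```

In training, the reconstruction error against the observed value of the selected variable would drive updates to the encoder and decoder weights.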
Publication No.: EP4591217A1 30/07/2025
Applicant: MICROSOFT TECHNOLOGY LICENSING LLC [US]
Abstract of: US2024104338A1
A method comprising: sampling a temporal causal graph from a temporal graph distribution specifying probabilities of directed causal edges between different variables of a feature vector at a present time step, and from one variable at a preceding time step to another variable at the present time step. Based on this, there are identified: a present parent, which is a cause of the selected variable in the present time step, and a preceding parent, which is a cause of the selected variable from the preceding time step. The method then comprises: inputting a value of each identified present and preceding parent into a respective encoder, resulting in a respective embedding of each of the present and preceding parents; combining the embeddings of the present and preceding parents, resulting in a combined embedding; and inputting the combined embedding into a decoder, resulting in a reconstructed value of the selected variable.