Abstract of: WO2025165604A1
The method may include inputting a first set of data into a first model; for each user in the second group, generating a first similarity score; generating a relevance score for each parameter; determining a subset of parameters based on relevance; inputting the subset of parameters, a second set of data, and a third set of data into a second model; generating a space-partitioning data structure based on the second set of data; for each user in the first group, determining a feature distance between a representation of the user in the first group and a representation of a user in the second group based on the third set of data and the space-partitioning data structure; for each user in the second group, generating a second similarity score; and for each user in the second group, generating an overall similarity score.
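The feature-distance step above relies on an unspecified space-partitioning data structure. As a minimal, hypothetical sketch (not from the patent), the snippet below uses a 1-D sorted array queried by binary search as a stand-in for such a structure, with single-number user representations for brevity:

```python
from bisect import bisect_left

class SortedIndex:
    """Minimal 1-D space-partitioning structure: a sorted array queried
    by binary search (a stand-in for e.g. a k-d tree)."""
    def __init__(self, values):
        self.values = sorted(values)

    def nearest(self, x):
        # Only the neighbors around the insertion point can be closest.
        i = bisect_left(self.values, x)
        candidates = self.values[max(0, i - 1):i + 1]
        return min(candidates, key=lambda v: abs(v - x))

# Hypothetical second-group user representations (one feature each).
index = SortedIndex([0.2, 1.5, 3.0, 4.4])

# Feature distance from each first-group user to the closest second-group user.
group1 = [1.0, 3.1]
distances = [abs(u - index.nearest(u)) for u in group1]
print(distances)
```

With higher-dimensional representations, the same pattern would use a k-d tree or similar structure instead of a sorted array.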
Abstract of: WO2025165854A1
A method can include: receiving, by a computational device at a wellsite, real-time time series data from a pump system operating at the wellsite, where the wellsite includes a wellbore in contact with a fluid reservoir; processing, using the computational device, a portion of the time series data to generate feature values as input to a trained machine learning model to detect pump system behavior indicative of a forthcoming performance issue of the pump system; and issuing a signal, responsive to detection of the pump system behavior, to mitigate the forthcoming performance issue of the pump system.
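The detection pipeline can be illustrated with a minimal sketch. The patent's trained model and engineered features are unspecified; below, a trailing-window z-score over raw readings stands in for them, with made-up pressure values:

```python
from statistics import mean, stdev

def detect_anomaly(series, window=20, z_threshold=3.0):
    """Flag points where a reading deviates from the trailing window's
    mean by more than z_threshold standard deviations -- a crude stand-in
    for the patent's trained model over engineered feature values."""
    alerts = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = mean(hist), stdev(hist) or 1e-9
        if abs(series[t] - mu) / sigma > z_threshold:
            alerts.append(t)
    return alerts

# Steady pump pressure with one sudden spike at index 30.
readings = [100.0 + 0.1 * (i % 5) for i in range(40)]
readings[30] = 115.0
print(detect_anomaly(readings))  # → [30]
```

A detection at index 30 would then trigger the mitigation signal described in the abstract.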
Abstract of: WO2025166095A1
Methods, systems, and computer program products are provided for determining feature importance using Shapley values associated with a machine learning model. An example method includes training a classification machine learning model, performing a plurality of feature ablation procedures on the classification machine learning model using a plurality of features to provide a distribution of feature ablation outcomes, training an explainer neural network machine learning model based on the distribution of the feature ablation outcomes to provide a trained explainer neural network machine learning model, wherein the explainer neural network machine learning model is configured to provide an output that comprises a prediction of a Shapley value associated with a feature, and determining one or more Shapley values of an input feature using the explainer neural network machine learning model.
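The quantity the explainer network learns to predict can be made concrete with a small sketch. The snippet below computes exact Shapley values by averaging marginal contributions over all feature orderings (tractable only for a handful of features, which is why the patent trains an approximating network); the additive toy model is illustrative, not from the patent:

```python
from itertools import permutations

def model(features, present):
    # Toy value function: sum of the features whose index is in `present`.
    # Absent (ablated) features contribute a baseline of 0.
    return sum(features[i] for i in present)

def shapley_values(features):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering in which features can be added."""
    n = len(features)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        present = set()
        for i in order:
            before = model(features, present)
            present.add(i)
            after = model(features, present)
            phi[i] += after - before
    return [p / len(orders) for p in phi]

vals = shapley_values([2.0, 5.0, 1.0])
print(vals)  # → [2.0, 5.0, 1.0]
```

For an additive model the Shapley values equal the per-feature contributions, which makes the example easy to verify; the trained explainer network approximates this computation in a single forward pass.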
Abstract of: WO2025165133A1
The disclosure relates to a 5G or 6G communication system for supporting a higher data transmission rate. The present disclosure provides a method and system for discovering an Artificial Intelligence/Machine Learning (AI/ML) model for transfer learning. The method includes: receiving, from a second entity, a first request message requesting to store machine learning (ML) model information; verifying whether the second entity is authorized to store the ML model, where the ML model is identified by the analytics ID included in the first request message; storing the ML model information based on the request message and the result of the verification of the second entity; and transmitting, to the second entity, a first response message, as a response to the request message, indicating an identifier for the ML model.
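The verify-store-respond flow can be sketched in a few lines. All field and entity names below are illustrative placeholders, not the actual message formats:

```python
def handle_store_request(registry, authorizations, request):
    """Sketch of the receiving entity's logic: verify that the requester
    is authorized for the analytics ID named in the request, store the
    model information, and answer with an identifier for the stored model."""
    entity = request["entity"]
    analytics_id = request["analytics_id"]
    if analytics_id not in authorizations.get(entity, set()):
        return {"status": "rejected"}
    model_id = f"{analytics_id}-{len(registry)}"   # identifier for the model
    registry[model_id] = request["model_info"]
    return {"status": "stored", "model_id": model_id}

registry = {}
authorizations = {"entity-2": {"analytics-7"}}
resp = handle_store_request(registry, authorizations,
    {"entity": "entity-2", "analytics_id": "analytics-7",
     "model_info": {"uri": "model-store/7"}})
print(resp)  # → {'status': 'stored', 'model_id': 'analytics-7-0'}
```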
Abstract of: EP4597317A1
Disclosed are an optimization method for distributed execution of a deep learning task and a distributed system. The method includes that: a computation graph is generated based on the deep learning task, and hardware resources are allocated for its distributed execution; the allocated hardware resources are grouped to obtain at least one grouping scheme; for each grouping scheme, tensor information related to multiple operators contained in the computation graph is split based on the value of at least one factor under that grouping scheme to obtain multiple candidate splitting solutions; and an optimal solution for executing the deep learning task on the hardware resources is selected by using a cost model. Through operator splitting based on device grouping, combined with optimization solving based on the cost model, automatic optimization of distributed execution for various deep learning tasks is realized. Furthermore, computation-graph partitioning based on grouping can be introduced, and the solution space can be restricted according to different levels of optimization, thereby generating a distributed execution solution of the required optimization level within controllable time.
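The grouping-plus-cost-model search can be sketched with a toy example. The cost terms and coefficients below are made up purely for illustration; a real cost model would be calibrated against the hardware:

```python
def candidate_splits(num_devices):
    # Enumerate (data_parallel, tensor_parallel) groupings whose product
    # uses all devices -- a toy stand-in for the device grouping schemes.
    return [(d, num_devices // d) for d in range(1, num_devices + 1)
            if num_devices % d == 0]

def cost(split, batch=1024, hidden=4096):
    # Illustrative cost model: per-device compute plus overhead terms.
    dp, tp = split
    compute = (batch / dp) * (hidden / tp)       # per-device work
    comm = 0.1 * hidden * (tp - 1)               # tensor-parallel all-reduce
    sync = 0.05 * batch * (dp - 1)               # gradient synchronization
    mem = 0.2 * hidden * (dp - 1)                # replicated-weights overhead
    return compute + comm + sync + mem

def best_split(num_devices):
    # Select the grouping scheme with the lowest modeled execution cost.
    return min(candidate_splits(num_devices), key=cost)

print(best_split(8))  # → (2, 4)
```

With these toy coefficients the search settles on a mixed grouping rather than either extreme, which is the kind of trade-off the cost-model-driven selection is meant to capture.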
Abstract of: EP4596425A1
The present disclosure provides techniques for dynamic utilization of aircraft based on environmental conditions. A proposed flight plan for an aircraft is received. Environment data representing a set of environmental conditions at a source airport indicated in the proposed flight plan is collected. Weather data representing a set of environmental conditions at a destination airport indicated in the proposed flight plan is collected. Operation data related to the aircraft indicated in the proposed flight plan is received. Aircraft engine degradation of the aircraft is dynamically simulated based on the collected environment data and the received operation data using a trained machine learning (ML) model. The simulated aircraft engine degradation is output.
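A minimal sketch of such a simulation follows. The wear-rate formula and every coefficient are illustrative placeholders for the trained ML model, not disclosed values:

```python
def simulate_degradation(env, ops, hours):
    """Hypothetical degradation model: exponential efficiency loss whose
    per-hour rate grows with dust, ambient temperature, and thrust usage."""
    rate = 1e-5 * (1
                   + 3.0 * env["dust"]                        # particulate load
                   + 0.01 * max(0.0, env["temp_c"] - 15.0)    # hot-day penalty
                   + 0.5 * ops["thrust_derate"])              # operational wear
    # Compound the per-hour wear over the flight hours.
    return 1.0 - (1.0 - rate) ** hours

loss = simulate_degradation({"dust": 0.2, "temp_c": 35.0},
                            {"thrust_derate": 0.1}, hours=1000)
print(round(loss, 4))
```

The simulated loss for a route could then feed the utilization decision the disclosure describes.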
Abstract of: EP4597360A1
A system includes a hardware processor configured to execute software code to receive interaction data identifying an action and personality profiles corresponding respectively to multiple participant cohorts in the action, generate, using the interaction data, an interaction graph of behaviors of the participant cohorts in the action, simulate, using a behavior model, participation of each of the participant cohorts in the action to provide a predicted interaction graph, and compare the predicted and generated interaction graphs to identify a similarity score for the predicted interaction graph relative to the generated interaction graph. When the similarity score satisfies a similarity criterion, the software code is executed to train, using the behavior model, an artificial intelligence character for interactions. When the similarity score fails to satisfy the similarity criterion, the software code is executed to modify the behavior model based on one or more differences between the predicted and generated interaction graphs.
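One simple way to realize the similarity score between a predicted and a generated interaction graph is Jaccard similarity over their edge sets; this is an assumption for illustration, as the patent does not specify the metric:

```python
def graph_similarity(predicted_edges, observed_edges):
    # Jaccard similarity over edge sets: |intersection| / |union|.
    p, o = set(predicted_edges), set(observed_edges)
    return len(p & o) / len(p | o) if p | o else 1.0

# Hypothetical behavior edges between participant cohorts.
observed = {("hero", "ally"), ("hero", "rival")}
predicted = {("hero", "ally"), ("ally", "rival")}
score = graph_similarity(predicted, observed)
print(score)
```

A threshold on this score would then decide between training the AI character and revising the behavior model.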
Abstract of: GB2637669A
Performing predictive inferences on a first natural language document having first sentences 611 and a second natural language document having second sentences 612. For each sentence from the first and second sentences, a sentence embedding is generated using a sentence embedding machine learning model 601; the embedding model is generated by updating the parameters of an initial, preferably pretrained, embedding model based on a similarity determination model error measure. For each sentence pair comprising a first and a second sentence, an inferred similarity measure 631 is determined using the similarity determination machine learning model 602 and the sentence embedding for each sentence 621, 622. For each similarity measure, a predictive output is generated, and prediction-based actions are performed based on the output. The predictive output may comprise generating a cross-document relationship graph with nodes and edges representing relationships between sentences. The similarity determination model error measure can be based on a deviation measure and a ground-truth similarity measure for a training sentence pair. The first document data object is preferably a user-provided query and the predictive output a search result.
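The embed-compare-graph pipeline can be sketched as follows. A normalized bag-of-words vector stands in for the trained embedding model 601, and cosine similarity for the similarity determination model 602; both substitutions are assumptions for illustration:

```python
import math
from collections import Counter

def embed(sentence):
    # Stand-in for the trained embedding model: a unit-length
    # bag-of-words vector keyed by lowercase tokens.
    counts = Counter(sentence.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def similarity(e1, e2):
    # Cosine similarity between two sparse unit-length embeddings.
    return sum(v * e2.get(w, 0.0) for w, v in e1.items())

def cross_document_pairs(doc1, doc2, threshold=0.5):
    # Edges of a cross-document relationship graph: sentence index pairs
    # whose inferred similarity clears the threshold.
    return [(i, j) for i, s1 in enumerate(doc1) for j, s2 in enumerate(doc2)
            if similarity(embed(s1), embed(s2)) >= threshold]

doc_a = ["the model embeds each sentence", "training updates the parameters"]
doc_b = ["each sentence is embedded by the model", "unrelated text here"]
print(cross_document_pairs(doc_a, doc_b, threshold=0.4))  # → [(0, 0)]
```

In the query use case, `doc_a` would hold the user-provided query and the surviving pairs would rank search results.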
Abstract of: EP4598100A1
A method performed by a device supporting artificial intelligence/machine learning (AI/ML) in a wireless communication system, according to at least one of embodiments disclosed in the present specification, comprises: receiving a configuration for an AI/ML model from a network; performing monitoring on performance of the AI/ML model on the basis of outputs from the AI/ML model; and performing AI/ML model management of maintaining the AI/ML model or at least partially changing the AI/ML model on the basis of the monitoring of the performance of the AI/ML model, wherein the monitoring of the performance of the AI/ML model may comprise first monitoring for monitoring performance of one or two or more intermediate outputs obtained before a final output from the AI/ML model, and second monitoring for monitoring performance of the final output obtained on the basis of the one or two or more intermediate outputs.
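The two-level monitoring and the keep/change decision can be sketched as below. The error thresholds and the partial-versus-full policy are illustrative assumptions, not from the specification:

```python
def manage_model(intermediate_errors, final_error,
                 intermediate_limit=0.2, final_limit=0.1):
    """First monitoring checks each intermediate output's error; second
    monitoring checks the final output's error. The decision keeps the
    model, partially changes it, or fully changes it."""
    bad_stages = [i for i, e in enumerate(intermediate_errors)
                  if e > intermediate_limit]
    if final_error <= final_limit and not bad_stages:
        return ("keep", [])
    if bad_stages and final_error <= final_limit:
        # Final output still acceptable: replace only the failing stages.
        return ("partial-change", bad_stages)
    return ("full-change", bad_stages)

print(manage_model([0.05, 0.30], final_error=0.08))  # → ('partial-change', [1])
```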
Abstract of: EP4597929A1
A computer-implemented method is provided for training a machine learning model to identify one or more network events associated with a network and representing a network security threat. The method comprises: a) obtaining a first dataset comprising data representative of a plurality of network events in a first network; b) obtaining a second dataset comprising data representative of a plurality of network events in a second network; c) performing covariate shift analysis on the first dataset and the second dataset to identify and classify a plurality of differences between the first dataset and the second dataset; d) performing domain adaptation on the first dataset, based on a classified difference, to generate a training dataset; and e) training a machine learning model using the training dataset to produce a trained threat detection model.
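Steps c) and d) can be sketched with a deliberately simple realization: a per-feature mean comparison stands in for the covariate shift analysis, and mean alignment stands in for the domain adaptation. Both are assumptions for illustration; the patent does not fix the techniques:

```python
from statistics import mean, stdev

def shifted_features(data1, data2, z_threshold=2.0):
    """Flag features whose means differ by more than z_threshold pooled
    standard deviations between the two datasets."""
    flagged = []
    for f in range(len(data1[0])):
        a = [row[f] for row in data1]
        b = [row[f] for row in data2]
        pooled = (stdev(a) + stdev(b)) / 2 or 1.0
        if abs(mean(a) - mean(b)) / pooled > z_threshold:
            flagged.append(f)
    return flagged

def adapt(data1, data2, flagged):
    """Naive domain adaptation: shift each flagged feature of the source
    dataset so its mean matches the target dataset's mean."""
    out = [list(row) for row in data1]
    for f in flagged:
        delta = mean(r[f] for r in data2) - mean(r[f] for r in data1)
        for row in out:
            row[f] += delta
    return out

# Toy event features from two networks; feature 1 is strongly shifted.
net_a = [[1.0, 10.0], [2.0, 11.0], [3.0, 12.0]]
net_b = [[1.2, 50.0], [2.1, 51.0], [2.9, 52.0]]
print(shifted_features(net_a, net_b))  # → [1]
```

The adapted first dataset would then serve as the training dataset of step e).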
Abstract of: GB2637695A
A combined hyperparameter and proxy model tuning method is described. The method involves iterations of a hyperparameter search 102. In each search iteration, candidate hyperparameters are considered. An initial (‘seed’) hyperparameter is determined by initialization function 110 and used to train (104) one or more first proxy models on a target dataset 101. From the first proxy model(s), one or more first synthetic datasets are sampled using sampling function 108. A first evaluation model is fitted to each first synthetic dataset, for each candidate hyperparameter, by applying fit function 106, enabling each candidate hyperparameter from hyperparameter generator 112 to be scored. Based on the respective scores assigned to the candidate hyperparameters, a candidate hyperparameter is selected and used (103) to train one or more second proxy models on the target dataset. The hyperparameter search may be random, grid-based, or Bayesian. The scores assigned by scoring function 114 can be F1 scores. Uses include generative causal models with neural network architectures.
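One iteration of the train-sample-fit-score loop can be sketched with deliberately tiny stand-ins: the "proxy model" is an empirical mean, the "synthetic dataset" is sampled around it, and the "evaluation model" is a shrinkage estimator whose shrinkage factor plays the role of the candidate hyperparameter. All of these substitutions are illustrative assumptions:

```python
import random
from statistics import mean

random.seed(0)

# Toy target dataset: noisy samples around an unknown center.
target = [3.0 + random.gauss(0, 0.5) for _ in range(200)]

def train_proxy(dataset):
    # "Proxy model": just the empirical center of the data.
    return mean(dataset)

def sample_synthetic(proxy_center, n=200):
    # Sampling function: draw a synthetic dataset from the proxy model.
    return [proxy_center + random.gauss(0, 0.5) for _ in range(n)]

def fit_and_score(shrinkage, dataset):
    # Evaluation model: shrinkage estimator of the center; score is the
    # negative mean squared error of its fit on the synthetic dataset.
    estimate = shrinkage * mean(dataset)
    return -mean((x - estimate) ** 2 for x in dataset)

candidates = [0.5, 0.9, 1.0]
proxy = train_proxy(target)
synthetic = sample_synthetic(proxy)
best = max(candidates, key=lambda h: fit_and_score(h, synthetic))
print(best)  # → 1.0
```

The selected candidate would then train the second proxy models on the target dataset, and the loop would repeat under a random, grid, or Bayesian search strategy.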
Abstract of: WO2024073382A1
Dynamic timers are determined using machine learning. The timers are used to control the amount of time that new data transaction requests wait before being processed by a data transaction processing system. The timers are adjusted based on changing conditions within the data transaction processing system. The dynamic timers may be determined using machine learning inference based on feature values calculated as a result of the changing conditions.
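The feature-to-timer mapping can be sketched as a hand-tuned linear stand-in for the ML inference step. The feature names, coefficients, and bounds below are all illustrative assumptions:

```python
def dynamic_timer(queue_depth, recent_latency_ms,
                  base_ms=5.0, min_ms=1.0, max_ms=50.0):
    """Map system-condition feature values to a wait time for new data
    transaction requests, clamped to a safe range. A trained model would
    replace this hand-tuned linear formula."""
    raw = base_ms + 0.5 * queue_depth + 0.2 * recent_latency_ms
    return max(min_ms, min(max_ms, raw))

print(dynamic_timer(queue_depth=4, recent_latency_ms=10.0))  # → 9.0
```

Recomputing the timer as queue depth and latency features change gives the adaptive behavior the abstract describes.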
Publication No.: EP4595692A1 06/08/2025
Applicant:
QUALCOMM INC [US]
QUALCOMM INCORPORATED
Abstract of: CN119949012A
An apparatus for wireless communication by a first wireless local area network (WLAN) device has a memory and one or more processors coupled to the memory. The processor is configured to transmit a first message indicating support of the first WLAN device for machine learning. The processor is also configured to receive a second message from a second WLAN device. The second message indicates support of the second WLAN device for one or more machine learning model types. The processor is configured to activate a machine learning session with the second WLAN device based at least in part on the second message. The processor is further configured to receive machine learning model structure information and machine learning model parameters from the second WLAN device during the machine learning session.
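The capability exchange and session activation can be sketched as below. The message fields and model type names are illustrative placeholders, not the actual WLAN signaling format:

```python
def activate_session(first_msg, second_msg):
    """First device advertises ML support; second device replies with its
    supported model types; a session activates only if the first device
    supports ML and a common model type exists."""
    if not first_msg.get("ml_supported"):
        return None
    common = set(first_msg["model_types"]) & set(second_msg["model_types"])
    if not common:
        return None
    # Pick one common type deterministically for the session.
    return {"active": True, "model_type": sorted(common)[0]}

session = activate_session(
    {"ml_supported": True, "model_types": ["autoencoder", "rnn"]},
    {"model_types": ["rnn", "transformer"]})
print(session)  # → {'active': True, 'model_type': 'rnn'}
```

Once active, the session would carry the model structure information and parameters from the second device.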