Abstract of: US20260058966A1
A method for behavior-based threat detection may include obtaining a first set of data corresponding to at least one of an employee or an enterprise associated with the employee. The method may include training a machine learning model for at least one of the employee or the enterprise associated with the employee by providing the first set of data to the machine learning model as training data, the machine learning model configured to identify deviations between behavioral traits of email communications and behavioral traits of the employee or the enterprise. The method may include receiving an email communication addressed to the employee. The method may include determining that the email communication represents a security risk by applying the machine learning model to the email communication. The method may include performing a remediation action on the email communication based on determining that the email communication represents a security risk.
Abstract of: US20260058949A1
A system and method for inferring an operating system version for a device based on communications security data. A method includes identifying a plurality of sequences in communications security data sent by the device; determining an operating system type of an operating system used by the device based on the identified plurality of sequences; applying a version-identifying model to the identified plurality of sequences, wherein the version-identifying model is a machine learning model trained to output a version identifier, wherein the applied version-identifying model is associated with the determined operating system type; and determining the operating system version of the device based on the output of the version-identifying model.
Abstract of: US20260056009A1
A virtual metrology apparatus, a virtual metrology method, and a virtual metrology program that allow a highly accurate virtual metrology process to be performed are provided. A virtual metrology apparatus includes an acquisition unit configured to acquire a time series data group measured in association with processing of a target object in a predetermined processing unit of a manufacturing process, and a training unit configured to train a plurality of network sections by machine learning such that a result of consolidating output data produced by the plurality of network sections processing the acquired time series data group approaches inspection data of a resultant object obtained upon processing the target object in the predetermined processing unit of the manufacturing process.
Abstract of: US20260056983A1
A method and system for providing an intelligent response agent based on a sophisticated reasoning and speculation function can generate and provide response data for queries related to specialized documents using a deep-learning neural network that implements a stepwise process for a sophisticated reasoning and speculation function.
Abstract of: EP4700603A1
Techniques are disclosed for a machine learning model, such as a large language model (LLM), that incorporates a model of a chain of thought of a particular user when responding to a query from the user. In one example, a system generates a knowledge graph of a chain of thought of the user. The knowledge graph comprises nodes representing topics present within past queries by the user and edges representing a co-occurrence between the topics. The system determines, based on a topic present within a query from the user and the knowledge graph, a goal query comprising a goal topic. The system provides the query from the user to a machine learning model to generate, by the machine learning model, a response. The machine learning model is constrained to include the goal topic of the goal query within the response. The system outputs, for display, the response to the query.
Abstract of: EP4700604A1
A system for preparing machine learning training data for use in evaluation of term definition quality. The system can include a server having at least one server processor and at least one server memory for storing a plurality of terms with corresponding definitions, and a plurality of client devices each having at least one client memory device and at least one client processor. The client processor is programmed to receive at least one of the plurality of terms and its corresponding definition from the server, display the term and its corresponding definition, and receive an indication of whether the definition satisfies one or more definition quality guidelines. The server memory includes instructions for causing the at least one server processor to receive the indications from the plurality of client devices and label each definition as satisfying each of the definition quality guidelines or not based on the received indications.
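The client-side indications described above must be aggregated into per-definition labels on the server. The abstract does not fix an aggregation rule; a simple majority vote can be sketched as follows (the function name, the vote threshold, and the example terms are illustrative assumptions, not from the patent):

```python
# Illustrative sketch: aggregate per-client quality indications into labels.
# The majority-vote rule and 0.5 threshold are assumptions; the patent only
# says labels are assigned "based on the received indications".

def label_definitions(indications, threshold=0.5):
    """indications: {term: [bool, ...]} -- one quality vote per client device.
    Returns {term: bool} -- True if the definition satisfies the guideline."""
    labels = {}
    for term, votes in indications.items():
        labels[term] = (sum(votes) / len(votes)) >= threshold
    return labels

votes = {
    "latency": [True, True, False],   # 2 of 3 clients approve
    "jitter":  [False, False, True],  # 1 of 3 clients approve
}
labels = label_definitions(votes)
```

In practice one such vote would be aggregated per quality guideline, since the abstract labels each definition against each guideline separately.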
Abstract of: EP4700652A1
The present application relates to the technical field of machine learning. Disclosed are a method and system for interpreting a sparse interaction effect modeled by a black-box artificial intelligence model. The method and system can automatically analyze the interaction distribution modeled by a model. The implementation comprises the following steps: providing data to be assessed; using a black-box model to perform prediction on the data, so as to obtain a prediction result of the model; on the basis of an output of the black-box model, modeling the interaction effect between input units of samples, calculating the interaction intensity between combinations formed by the input units, and expressing the black-box model as "AND addition relationships" and "OR addition relationships" between the combinations of the input units; and performing optimization such that the "AND addition relationships" and the "OR addition relationships" are sparser. The advantage of the present invention is that it provides a quantification method for interpreting the interactions modeled by a black-box artificial intelligence model; in comparison with previous research, a sparser and more concise interaction interpretation can be obtained.
Abstract of: EP4700664A2
A system and method includes receiving a tuning work request for tuning an external machine learning model; implementing a plurality of distinct queue worker machines that perform various tuning operations based on the tuning work data of the tuning work request; implementing a plurality of distinct tuning sources that generate values for each of the one or more hyperparameters of the tuning work request; selecting, by one or more queue worker machines of the plurality of distinct queue worker machines, one or more tuning sources of the plurality of distinct tuning sources for tuning the one or more hyperparameters; and using the selected one or more tuning sources to generate one or more suggestions for the one or more hyperparameters, the one or more suggestions comprising values for the one or more hyperparameters of the tuning work request.
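A "tuning source" as described above generates candidate values (suggestions) for the hyperparameters named in a tuning work request. A minimal sketch, assuming a random-search source over box-constrained ranges (the function name, hyperparameter names, and bounds are all illustrative assumptions):

```python
import random

# Illustrative sketch of one tuning source: a random-search sampler that
# produces suggestion dicts for the requested hyperparameters. The patent
# describes multiple distinct tuning sources; this stands in for one of them.

def random_tuning_source(space, n_suggestions=3, seed=0):
    """space: {hyperparameter: (low, high)} -> list of suggestion dicts."""
    rng = random.Random(seed)
    return [
        {name: rng.uniform(low, high) for name, (low, high) in space.items()}
        for _ in range(n_suggestions)
    ]

space = {"learning_rate": (1e-4, 1e-1), "dropout": (0.0, 0.5)}
suggestions = random_tuning_source(space)
```

In the described system, queue worker machines would select among several such sources (e.g., Bayesian or evolutionary samplers) rather than using a single fixed one.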
Abstract of: WO2024218535A1
The disclosure relates to a ML-based method for determining a CCE aggregation level for a UE in a PDCCH. The method comprises obtaining RBS traces. The method comprises training, using first data obtained from the traces, a machine learning model to predict a probability of discontinuous transmission (DTX) "isDTX probability". The method comprises inputting second data obtained from the traces into the machine learning model, obtaining the isDTX probability and expanding the second data with the isDTX probability. The method comprises, for each of a plurality of probability thresholds (PTs) and for each of a plurality of strategies, selecting a data having an isDTX probability greater or equal to the PT and best satisfying the strategy and using the data to train a classifier. The method comprises selecting one classifier and using the classifier for determining the CCE aggregation level for the UE in the PDCCH.
Abstract of: US2024354655A1
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method comprises: generating a set of candidate batches of model inputs; generating, for each candidate batch of model inputs, a respective score for the candidate batch of model inputs that characterizes: (i) an uncertainty of the machine learning model in generating predicted labels for the model inputs in the candidate batch of model inputs, and (ii) a diversity of the model inputs in the candidate batch of model inputs; and selecting the current batch of model inputs from the set of candidate batches of model inputs based on the scores; and training the machine learning model on at least the current batch of model inputs.
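The batch score described above combines (i) model uncertainty and (ii) input diversity. A minimal sketch, assuming mean predictive entropy for uncertainty, mean pairwise Euclidean distance for diversity, and an equal-weight combination (all three choices are assumptions; the patent does not fix them):

```python
import math

# Illustrative sketch of scoring candidate batches. Each candidate is a pair
# (inputs, predictions): feature vectors and predicted label distributions.

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def batch_score(inputs, predictions, alpha=0.5):
    # (i) uncertainty: mean entropy of the predicted label distributions
    uncertainty = sum(entropy(p) for p in predictions) / len(predictions)
    # (ii) diversity: mean pairwise distance between inputs in the batch
    pairs = [(a, b) for i, a in enumerate(inputs) for b in inputs[i + 1:]]
    diversity = sum(math.dist(a, b) for a, b in pairs) / max(len(pairs), 1)
    return alpha * uncertainty + (1 - alpha) * diversity

candidates = [
    ([(0.0, 0.0), (0.1, 0.1)], [(0.5, 0.5), (0.5, 0.5)]),  # uncertain, close
    ([(0.0, 0.0), (3.0, 4.0)], [(0.5, 0.5), (0.4, 0.6)]),  # uncertain, spread
]
best = max(candidates, key=lambda c: batch_score(*c))
```

With equal uncertainty, the more spread-out candidate batch wins, which is the behavior the abstract's diversity term is meant to encourage.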
Abstract of: EP4700611A1
In one aspect, the present disclosure provides a method of generating a balanced training dataset for a machine learning model, the method including: receiving flight sensor data corresponding to a plurality of flights, and applying one or more criteria to the flight sensor data to generate a training dataset including a plurality of first instances corresponding to flights of the plurality of flights. The method further includes assigning, using component fault data, respective labels to the plurality of first instances, and generating, for groups of one or more labels of the respective labels, a respective plurality of flight series. Each flight series includes a respective sequence of second instances that is based on some of the plurality of first instances, and that concludes with a second instance that is assigned a label included in the group.
Abstract of: WO2026039830A1
A method includes determining, by a computer processor, a set of optimal active device parameters for an active device of an integrated circuit using an optimization engine, a first machine-learning model, and a set of target circuit performance metrics; determining, by the computer processor, a set of optimal passive network parameters for a passive network of the integrated circuit using the optimization engine, a second machine-learning model, and the set of target circuit performance metrics; and generating, by the computer processor, a circuit design for the integrated circuit based on the set of optimal active device parameters and the set of optimal passive network parameters.
Abstract of: US20260051409A1
Systems and methods for predicting esophageal adenocarcinoma (EAC) and gastric cardia adenocarcinoma (GCA) using machine learning are provided. An example system may obtain an electronic health record (EHR) dataset, identify missing values in the EHR dataset, and generate imputed values for the missing values using simple random sampling imputation. The system may train a model using an extreme gradient boosting algorithm and a training dataset including the EHR dataset to generate a trained model including multiple decision trees. Training the model includes tuning the model to achieve the greatest value of the area under the receiver operating characteristic (ROC) curve associated with the model. The system may obtain a patient EHR dataset, generate a prediction associated with a risk of EAC and/or GCA by applying the trained model to the patient EHR dataset, and provide the prediction to a computing device to determine a patient treatment protocol.
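The simple random sampling imputation step named above can be sketched directly: each missing entry in a column is replaced by a value drawn uniformly at random from that column's observed values (the function name and the fixed seed are illustrative assumptions):

```python
import random

# Illustrative sketch of simple-random-sampling imputation for one column
# of an EHR dataset. A fixed seed is used here only for reproducibility.

def impute_column(values, rng=None):
    rng = rng or random.Random(0)
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]

col = [4.1, None, 5.3, None, 4.8]
imputed = impute_column(col)
```

Unlike mean imputation, this preserves the empirical distribution of the column, which can matter for tree-based models such as the gradient-boosted ensemble the abstract describes.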
Abstract of: US20260049833A1
An apparatus and method for transport management are presented. The apparatus includes a memory communicatively connected to a processor to output routing data of transport entities as a function of aggregated transport data, wherein the outputting comprises: receiving transport data and bound parameters of a transport from a carrier device; iteratively training an aggregation machine-learning model to combine the transport data, wherein the training comprises generating aggregation training data correlating the transport data as inputs and aggregated transport data as outputs; modifying a characteristic of the transport; updating the aggregated transport data based on the modification of the characteristic of the transport; retraining the aggregation machine-learning model as a function of the updated aggregated transport data; generating the routing data, wherein the routing data comprises instructions to further modify the characteristic of the transport; and automatically changing the characteristic of the transport based on the routing data.
Abstract of: US20260050503A1
Methods and systems are provided for generating real-time resolutions of errors arising from user submissions, computer processing tasks, and the like. For example, the methods and systems described herein describe improvements for detecting errors in one or more user submissions and providing resolutions in real time. To provide these improvements, the methods and systems use a machine learning model that is trained to return probability error scores based on a plurality of variables. By using this multivariate approach, the methods and systems may produce highly accurate detection.
Abstract of: US20260050504A1
A method and system for detecting harmful shift in a machine learning (ML) model associated with unlabeled data utilized by the ML model. The method includes implementing an error estimator model with a regressor algorithm and training the error estimator model with a first portion of a labeled calibration dataset. The method further includes computing, by the trained error estimator model, an error estimation threshold based on a second portion of the labeled calibration dataset; predicting a performance of the ML model by detecting the harmful shift via the trained error estimator model analyzing the unlabeled data over a predetermined time period and determining a proportion of estimated errors associated with the unlabeled data over the predetermined time period that exceeds the error estimation threshold; and generating an alert when the proportion of estimated errors exceeds the error estimation threshold.
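The alerting logic described above reduces to a proportion test over per-sample error estimates from the trained error estimator. A minimal sketch, abstracting away the estimator itself (the function name, the 20% cutoff, and the example window are illustrative assumptions):

```python
# Illustrative sketch: flag harmful shift when too many per-sample error
# estimates over the monitoring window exceed the calibrated threshold.
# The error estimator itself (a trained regressor) is abstracted away here.

def harmful_shift_alert(estimated_errors, error_threshold, max_proportion=0.2):
    """estimated_errors: per-sample error estimates over the window."""
    exceeding = sum(1 for e in estimated_errors if e > error_threshold)
    proportion = exceeding / len(estimated_errors)
    return proportion > max_proportion

window = [0.05, 0.40, 0.02, 0.55, 0.60]  # 3 of 5 estimates exceed 0.3
alert = harmful_shift_alert(window, error_threshold=0.3)
```

In the described system, `error_threshold` would come from the second portion of the labeled calibration dataset rather than being set by hand.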
Abstract of: US20260050582A1
Systems and methods for generating a parser from a log file, including: receiving a log file, wherein the log file is a structured text file of a plurality of data elements; invoking a machine learning model to: process the log file to identify name-value pairs from the data elements; classify the log file as being associated with a schema based in part on the name-value pairs; map a first name-value pair to a first input field of the schema based on characteristics of the first name-value pair; determine a confidence level associated with mapping the first name-value pair to the first input field; and when the confidence level for mapping the first name-value pair exceeds a threshold, provide the first name-value pair to the first input field; and generating a parser from the plurality of input fields of the schema.
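The first step, identifying name-value pairs in a log line, can be sketched with a regular expression; the pattern and example line are illustrative assumptions, and a production parser would handle quoting, escapes, and nested values:

```python
import re

# Illustrative sketch of name-value-pair extraction for one log line.
# Matches either a quoted value or a run of non-whitespace after '='.
PAIR_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def extract_pairs(line):
    return {k: v.strip('"') for k, v in PAIR_RE.findall(line)}

line = 'ts=2024-05-01T12:00:00Z level=info msg="user login" user_id=42'
pairs = extract_pairs(line)
```

In the described system, the extracted pairs would then feed schema classification and field mapping rather than being consumed directly.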
Abstract of: WO2026039018A1
The present invention relates to a system (1) that enables base station energy data, in the form of cost and data traffic, to be analyzed via random forest machine learning techniques, base stations suspected of fraud to be detected in the analysis, and the operator to be notified.
Abstract of: WO2026034877A1
The present invention relates to a method by which a terminal selects a beam to be reported in machine-learning-based beam management, the method comprising the steps of: receiving, from a base station, configuration information of a measurement resource set and a number M of report beams for AI/ML inference; determining, on the basis of measurement values of the measured beams, a beam to be reported; and transmitting the determined beam information to the base station, wherein, when the number of candidate beams to be reported exceeds M due to tie beams having the same or similar measurement values, the final beams to be reported are determined by excluding at least one of the tie beams through a tie-beam processing operation.
Abstract of: WO2026035304A1
Systems and methods for computer modeling in medicine. A sort of periodic table of medical models is described for personalized diagnostics, prognostics, and therapeutics, including at least 80 major categories of medical models. Generative artificial intelligence and geometric deep learning techniques and algorithms, including 2D and 3D graph machine learning and GenAI algorithms, are described, tailored, and applied to diagnostic disease description, prognostic prediction, and therapeutic development and management, including generation of novel synthetic drugs. The AI and machine learning techniques and algorithms are applied to understand each individual's genetic, RNA, and protein anomalies that represent the source of many unique patient diseases. AI-enabled software agents assist physicians and researchers in building patient medical models. Several personalized medicine applications of individualized medical modeling include cardiovascular
Abstract of: WO2026035369A2
Described herein is a computer-implemented method for training a machine learning model for predicting one or more physiochemical properties of a nanoparticle. In some instances, the method includes receiving a set of data including nanoparticles, generating a descriptor and fingerprint for each nanoparticle in the set of data, creating a plurality of sets of training data and a plurality of sets of test data, training a machine learning model of an artificial intelligence (AI) system using the plurality of sets of training data, and predicting a property of the drug-excipient pair of the plurality of sets of test data based on drug-excipient pairs and property values of each drug-excipient pair of the plurality of sets of training data. Also described herein are excipients conjugated to a polyethylene glycol moiety, or pharmaceutically acceptable salts thereof. In some instances, the excipients may be used to form nanoparticles with a therapeutic compound.
Abstract of: WO2026030790A1
The disclosure relates to machine learning, and more particularly to training a machine learning model comprising a classical sub-model and a quantum sub-model. A classical processor receives, from a classical device, an intermediate classical output from a classical sub-model of the machine learning model and configures quantum gates of a quantum circuit based on the intermediate classical output. A quantum processor executes the quantum circuit using the quantum gates to determine a quantum circuit output, the quantum circuit being configured to represent a quantum sub-model of the machine learning model. The classical processor adapts the quantum circuit output to determine a further classical output; updates the quantum sub-model based on minimising a loss involving the further classical output; and transmits a loss propagation value to the classical device, to cause the classical device to update the classical sub-model based on the loss propagation value, thereby training the machine learning model.
Abstract of: US20260043656A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining elements of a shipping network. One of the methods includes obtaining environmental input data, wherein the environmental input data includes weather forecast data; providing the environmental input data to a circulation model; and providing output environmental conditions from the circulation model to a machine learning model trained to generate a route for a ship.
Abstract of: US20260042011A1
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
Publication No.: US20260045348A1 12/02/2026
Applicant:
RXASSURANCE CORP D/B/A OPISAFE [US]
Abstract of: US20260045348A1
Examples described herein generally relate to recommending drug dosage reductions for a patient. A computer system may generate an initial non-linear glide path of recommended dosages starting at an initial dosage of a drug for a patient and ending at a goal dosage at an estimated time of arrival. The system may receive periodic patient monitoring including at least one drug withdrawal scale score, at least one anxiety scale score, and at least one indicated side effect. The system may determine, using one or more machine learning algorithms, a revised glide path based on a data record for the patient, the at least one drug withdrawal scale score, and the at least one anxiety scale score for the patient. The system may recommend at least one medication or therapy for the indicated side effect. The system may determine a prescription adjustment based on the revised glide path.
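An initial non-linear glide path as described above can be sketched as an exponential taper from the initial dosage to the goal dosage; the taper shape, function name, and example values are illustrative assumptions, as the patent does not fix a particular curve:

```python
# Illustrative sketch of a non-linear glide path: a constant-ratio
# (exponential) taper over a fixed number of steps to the goal dosage.

def glide_path(initial, goal, steps):
    """Return steps + 1 recommended dosages from initial down to goal."""
    ratio = (goal / initial) ** (1 / steps)
    return [initial * ratio ** i for i in range(steps + 1)]

path = glide_path(initial=60.0, goal=15.0, steps=4)
```

An exponential taper takes equal percentage reductions at each step, so early absolute reductions are larger than later ones; the described system would then revise this path using the patient's monitoring scores.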