Abstract of: US2025322366A1
The present disclosure generally relates to a computer device, method, and system that use machine learning to capture and analyze profile data communicated across a computing environment, including but not limited to each user's profile, online behaviors, and career progression path, and that provide dynamic recommendations of online actions to be performed to reach a desired target state.
Abstract of: US2025322958A1
Techniques are disclosed for using feature delineation to reduce the impact of machine learning cardiac arrhythmia detection on power consumption of medical devices. In one example, a medical device performs feature-based delineation of cardiac electrogram data sensed from a patient to obtain cardiac features indicative of an episode of arrhythmia in the patient. The medical device determines whether the cardiac features satisfy threshold criteria for application of a machine learning model for verifying the feature-based delineation of the cardiac electrogram data. In response to determining that the cardiac features satisfy the threshold criteria, the medical device applies the machine learning model to the sensed cardiac electrogram data to verify that the episode of arrhythmia has occurred or determine a classification of the episode of arrhythmia.
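The two-stage gating described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature names, thresholds, and the stand-in classifier are all assumptions chosen to show the power-saving structure (run the cheap feature check always, run the expensive model only when the check fires).

```python
# Illustrative two-stage arrhythmia-detection gate. All features, thresholds,
# and labels here are hypothetical stand-ins for the device's actual logic.

def extract_features(egm_samples):
    """Cheap feature-based delineation of a cardiac electrogram window."""
    variability = max(egm_samples) - min(egm_samples)  # placeholder feature
    return {"variability": variability}

def satisfies_threshold(features, min_variability=0.5):
    """Gate: only run the ML model when features suggest an episode."""
    return features["variability"] >= min_variability

def classify_with_ml(egm_samples):
    """Stand-in for the (expensive) machine learning model."""
    return "AF" if max(egm_samples) > 1.0 else "normal"

def detect(egm_samples):
    features = extract_features(egm_samples)
    if not satisfies_threshold(features):
        return "no-episode"  # ML model skipped: saves power
    return classify_with_ml(egm_samples)
```

The design point is that `satisfies_threshold` runs on every sensed window, while `classify_with_ml` runs only on the small fraction of windows that pass the gate.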
Abstract of: US2025322289A1
A smart shopping cart includes a load sensor to measure the weight of items added to the cart. To avoid waiting for the load sensor to converge, a detection system predicts the weight of items added to the storage area of a smart shopping cart based on the shape of a load curve output by the load sensor when an item is added to the cart. The detection system receives load data from the load sensor, detects that an item was added to the storage area of the shopping cart during a time period, and identifies a set of load measurements captured by the load sensor during the time period. The set of load measurements comprises a load curve, to which the detection system applies a weight prediction model to generate a predicted weight of the added item.
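The abstract does not specify the weight prediction model, but the idea of predicting from the shape of a pre-settling load curve can be sketched with a simple linear model over hand-picked curve features. Everything here (feature choice, linearity, weights) is an assumption for illustration only.

```python
# Hypothetical sketch: predict an added item's weight from load-curve shape
# before the sensor converges. The real model in the patent is unspecified.

def curve_features(load_curve):
    """Summarise a load-curve window; these three features are assumptions."""
    peak = max(load_curve)
    final = load_curve[-1]
    rise = peak - load_curve[0]
    return [peak, final, rise]

def predict_weight(model_weights, load_curve):
    """Apply a linear weight-prediction model to the curve features."""
    feats = curve_features(load_curve)
    return sum(w * f for w, f in zip(model_weights, feats))
```

In practice the weights would be fit offline on labelled (curve, true weight) pairs; the sketch only shows the inference step on one detected-item window.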
Abstract of: US2025322952A1
A computer-implemented method for control of a surgical device includes accessing raw data captured by a sensor of the surgical device during a procedure, filtering the raw data with a filter, generating a difference data based on a difference between the raw data and the filtered data, generating zero-crossing data based on determining a point in time where an amplitude of the difference data last crossed from a non-zero amplitude value through a zero amplitude value to a non-zero amplitude value of the opposite sign, providing the zero-crossing data as an input to a machine learning classifier, and predicting a probability of an end stop point based on the machine learning classifier. The end stop point includes a point in time where a knife of the surgical device ceases to cut tissue.
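The difference-and-zero-crossing step can be sketched directly. A moving average stands in for the unspecified filter, and the downstream classifier is out of scope; only the signal-processing step the abstract describes is shown.

```python
# Sketch of the raw-minus-filtered zero-crossing computation. The moving
# average is an assumed stand-in for the patent's (unspecified) filter.

def moving_average(signal, k=3):
    """Simple causal moving-average filter."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def last_zero_crossing(raw, filtered):
    """Index of the most recent point where the difference signal crossed
    from one sign through zero to the opposite sign, or None if it never did."""
    diff = [r - f for r, f in zip(raw, filtered)]
    last = None
    for i in range(1, len(diff)):
        if diff[i - 1] * diff[i] < 0:  # opposite signs => crossed zero
            last = i
    return last
```

The returned index (a point in time) would then be fed, per the abstract, to a machine learning classifier that predicts the probability of the end stop point.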
Abstract of: US2025322210A1
A method of performing sustainability optimization includes processing a set of inputs using a trained machine learning model to generate a set of outputs, wherein the set of inputs correspond to configuration parameters of a process configured to be performed on a physical machine, and wherein the set of outputs includes a plurality of predicted waste metrics resulting from performance of the process on the physical machine. The method further includes optimizing the set of inputs and the set of outputs for meeting sustainability constraints in view of process constraints and outputting a recommendation for operating the process on the physical machine based on the optimized set of inputs and set of outputs, for avoiding a risk of failure to operate the process, while meeting the sustainability constraints and the process constraints.
Abstract of: US2025322037A1
A method for assessing and/or monitoring a process and/or a multi-axis machine includes recording at least one data time series, wherein the at least one data time series includes at least one channel describing at least one parameter of the process and/or of the multi-axis machine, and wherein the data time series is caused by the process. An interpretable result is determined by a machine learning algorithm based on the at least one data time series, wherein the result describes a classification value of a state in the process and/or of a state of the multi-axis machine. A warning is output when determining the result if the classification value of the state in the process and/or of the state of the multi-axis machine is assigned to a value of an error class that is in a warning range or corresponds to a warning range, and an all-clear signal is output if the classification value of the state in the process and/or of the state of the multi-axis machine is assigned to a value of an error class that is in an all-clear range or corresponds to an all-clear range.
Abstract of: US2025322272A1
A multimodal content management system having a block-based data structure can include a question and answer (Q&A) assistant (e.g., a chatbot). The system can receive a natural language prompt and generate a result set. The result set can include blocks (e.g., blocks that include responsive content, including content in different modalities). The system can apply a set of authority signals to items in the result set to generate a ranked result set. The authority signals can be generated using aspects of the block-based data structure, such as block properties. The system can cause the Q&A assistant to return a set of hyperlinks to the ranked result set items. The hyperlinks can be operable to enable navigation to block content without closing the Q&A assistant.
Abstract of: US2025322269A1
Systems and methods for implementing a threat model that classifies contextual events as threats. The method can include: accessing a threat model; identifying a set of contextual events, wherein each contextual event comprises a set of semantic primitives predicted from a plurality of sensor streams; and determining a threat level for each contextual event based on threat probabilities.
Abstract of: US2025322236A1
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for augmenting machine learning language models using search engine results. One of the methods includes obtaining question data representing a question; generating, from the question data, a search engine query for a search engine; obtaining a plurality of documents identified by the search engine in response to processing the search engine query; generating, from the plurality of documents, a plurality of conditioning inputs each representing at least a portion of one or more of the obtained documents; for each of a plurality of the generated conditioning inputs, processing a network input generated from (i) the question data and (ii) the conditioning input using a neural network to generate a network output representing a candidate answer to the question; and generating, from the network outputs representing respective candidate answers, answer data representing a final answer to the question.
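The retrieve-condition-answer flow above can be shown as a pipeline skeleton. Every component here is a trivial stand-in (keyword search, word-overlap scoring) for the patent's search engine and neural network; only the data flow matches the abstract.

```python
# Pipeline skeleton for search-engine-augmented question answering.
# search() and answer_with_context() are assumed stand-ins, not real components.

def make_query(question):
    """Generate a search engine query from the question data."""
    return question.lower()

def search(query, corpus):
    """Stand-in search engine: return documents sharing a word with the query."""
    words = set(query.split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def answer_with_context(question, context):
    """Stand-in for the neural network: returns (candidate answer, score)."""
    overlap = len(set(question.lower().split()) & set(context.lower().split()))
    return context, overlap

def answer(question, corpus):
    """Condition one network call per retrieved document, then aggregate."""
    docs = search(make_query(question), corpus)
    candidates = [answer_with_context(question, d) for d in docs]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])[0]  # best-scoring candidate
```

In the patented method the per-document network outputs are combined into final answer data; the sketch aggregates by simply taking the highest-scoring candidate.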
Abstract of: US2025322316A1
A candidate content item is identified for integration into a content collection. The candidate content item is associated with a first value. Using at least one machine learning model, a select value and a skip value are automatically generated for the candidate content item. The select value indicates a likelihood that the user will select the candidate content item, and the skip value indicates a likelihood that the user will bypass the candidate content item. A second value is generated for the candidate content item based on the first value, the select value, and the skip value. The candidate content item is automatically selected from a plurality of candidate content items based on the second value meeting at least one predetermined criterion. The selected candidate content item is then automatically integrated into the content collection, which is caused to be presented on a device of a user.
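The value-combination step can be sketched in a few lines. The abstract does not give the combining arithmetic, so the formula below (base value weighted by select likelihood and discounted by skip likelihood) and the threshold criterion are assumptions.

```python
# Hypothetical combination of first value, select value, and skip value.
# The actual formula and criterion in the patent are unspecified.

def second_value(first_value, select_prob, skip_prob):
    """Blend the base value with the predicted select/skip likelihoods."""
    return first_value * select_prob * (1.0 - skip_prob)

def pick(candidates, threshold=0.2):
    """Keep candidates whose combined value meets the predetermined criterion.
    Each candidate is a (first_value, select_prob, skip_prob) tuple."""
    return [c for c in candidates if second_value(*c) >= threshold]
```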
Abstract of: US2025322312A1
Techniques are disclosed for revising training data used for training a machine learning model to exclude categories that are associated with an insufficient number of data items in the training data set. The system then merges any data items associated with a removed category into a parent category in a hierarchy of classifications. The revised training data set, which includes the recategorized data items and lacks the removed categories, is then used to train a machine learning model in a way that avoids recognizing the removed categories.
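The prune-and-merge step lends itself to a short sketch: count items per category, and re-label items in under-populated categories with their parent in the hierarchy. The hierarchy encoding and the minimum count are illustrative assumptions.

```python
# Sketch of removing sparse categories and merging their items upward.
# parent_of maps each category to its parent in the classification hierarchy.

def prune_and_merge(items, parent_of, min_count=2):
    """items: list of (data, category). Returns items with sparse categories
    re-labelled to the nearest sufficiently-populated ancestor."""
    counts = {}
    for _, cat in items:
        counts[cat] = counts.get(cat, 0) + 1
    out = []
    for data, cat in items:
        # Walk up the hierarchy while the current category is too sparse.
        while counts.get(cat, 0) < min_count and cat in parent_of:
            cat = parent_of[cat]
        out.append((data, cat))
    return out
```

The returned list is the revised training set: the sparse categories no longer appear as labels, so a model trained on it cannot learn to predict them.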
Abstract of: US2025322342A1
Mitigation of temporal generalization losses in a target machine learning model is disclosed. Mitigation can be based on identifying, removing, modifying, transforming, etc., features, explanatory variables, models, etc., that can have an unstable relationship with a target outcome over time. Implementation of a more stable representation can be initiated. Temporal stability measures (TSMs) for one or more model features can be determined based on one or more variable performance metrics (VPMs). A group of one or more VPMs can be selected based on features of a model in either a development or production environment. Model feature modification can be recommended based on a TSM, which can prune a feature, transform a feature, add a feature, etc. Temporal stability information can be presented, e.g., via a dashboard-type user interface. Models can be updated based on mutations of a model comprising one or more feature modifications, including competitive champion/challenger model updating.
Abstract of: AU2024243389A1
Disclosed are systems, methods, and devices for correcting or otherwise cleaning sensor data. Sensor readings and metadata or other information about the sensor readings can be collected, and one or more detection rules (e.g., machine learning models or other detection rules) can be automatically generated for modifying subsequent sensor data. Sensor readings can be refined or supplemented by applying applicable detection rules.
Abstract of: EP4632619A1
A method of performing sustainability optimization includes processing a set of inputs using a trained machine learning model to generate a set of outputs, wherein the set of inputs correspond to configuration parameters of a process configured to be performed on a physical machine, and wherein the set of outputs includes a plurality of predicted waste metrics resulting from performance of the process on the physical machine. The method further includes optimizing the set of inputs and the set of outputs for meeting sustainability constraints in view of process constraints and outputting a recommendation for operating the process on the physical machine based on the optimized set of inputs and set of outputs, for avoiding a risk of failure to operate the process, while meeting the sustainability constraints and the process constraints.
Abstract of: GB2640229A
An apparatus 100 comprising: means for receiving a network configuration 106 derived from a plurality of machine-learning, ML, models, each ML model directed towards a respective one or more radio access network, RAN, functionalities; means for receiving a plurality of predicted performance measurement, PM, counters output 108 from a plurality of ML performance measurement models, each ML performance measurement model corresponding to one of the plurality of ML models; and means for processing, using a common ML performance measurement counter model 102, the network configuration and the plurality of predicted performance measurement counters to determine a model output comprising, for one or more performance measurement counters, a respective plurality of impact scores 112, wherein each impact score is indicative of a predicted impact of a corresponding ML model in the plurality of ML models on the respective performance measurement counter of said impact score for the network configuration. The apparatus may further comprise means for executing the plurality of ML models on respective measurement data to generate a plurality of respective RAN functionality predictions; and means for generating, from the plurality of respective RAN functionality predictions, the network configuration.
Abstract of: WO2025209965A1
The invention concerns a computer-implemented method for predicting performance parameter values of at least one individual gas separation stage, the method comprising: - receiving (162) a set of data points (21), each data point comprising operating parameter values indicative of a configuration or state of a separation stage of a gas separation plant (1800) and comprising performance parameter values obtained by simulating the operation of said separation stage given the operating parameter values of said data point; - using (164) the received set of data points as a training data set in a machine learning process for generating a trained predictive model (1704-1708, 2304); - receiving (166) input parameter values being indicative of one or more operation parameter values for the at least one separation stage; - using (168) the trained model for predicting performance parameter values of the at least one individual separation stage as a function of the input parameter values; and - outputting (170) the predicted performance parameter values for use in the design or control of the single separation stage or of the plant comprising the same.
Abstract of: WO2025210109A1
This specification relates to the execution of machine-learning models on user devices. According to a first aspect of this specification, there is described apparatus comprising: means for receiving a network configuration derived from a plurality of machine-learning models, each machine-learning model directed towards a respective one or more radio access network functionalities; means for receiving a plurality of predicted performance measurement counters output from a plurality of machine-learning performance measurement models, each machine-learning performance measurement model corresponding to one of the plurality of machine-learning models; and means for processing, using a common machine-learning performance measurement counter model, the network configuration and the plurality of predicted performance measurement counters to determine a model output comprising, for one or more performance measurement counters, a respective plurality of impact scores. Each impact score is indicative of a predicted impact of a corresponding machine-learning model in the plurality of machine-learning models on the respective performance measurement counter of said impact score for the network configuration.
Abstract of: US2025315740A1
Methods, systems, and computer program products are provided for ensemble learning. An example system includes at least one processor configured to: (i) generate a rejection region for each baseline model of a set of baseline models; (ii) generate a global rejection region based on the rejection regions of each baseline model; (iii) train an ensemble machine learning model; (iv) update, based on a baseline model predictive performance metric for each baseline machine learning model, the set of baseline machine learning models; and (v) repeat (i)-(iv) until there is a single baseline model in the set of baseline models or a predictive performance or global acceptance ratio of the ensemble model satisfies a threshold.
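The iterative loop in steps (i)-(v) can be sketched at a high level. Rejection-region construction is abstracted into a per-model performance score, and the stopping conditions match the abstract: stop when one baseline model remains or the ensemble meets its threshold. All function names and the scoring scheme are assumptions.

```python
# High-level sketch of the ensemble-learning loop: repeatedly drop the
# worst-performing baseline model until a stopping condition is met.

def prune_until_done(models, score, ensemble_score, threshold=0.9):
    """models: list of model names. score(m): baseline performance metric.
    ensemble_score(models): performance of the ensemble built from models."""
    while len(models) > 1 and ensemble_score(models) < threshold:
        worst = min(models, key=score)
        models = [m for m in models if m != worst]  # update the baseline set
    return models
```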
Abstract of: US2025315339A1
A system and method for performing fault and event analysis in electrical substations comprises receiving a disturbance record triggered by an intelligent electronic device (IED) at an electrical substation, pre-processing the received disturbance record to extract at least one variable time series data of a plurality of electrical parameters, generating a causality matrix based on the extracted at least one variable time series data by applying causal analysis, predicting, using a machine learning (ML) module, a fault type at least based on the causality matrix, retrieving, from a knowledge database, a plurality of probable causes corresponding to the predicted fault type, determining at least one exact cause from the plurality of probable causes based on the causal pattern, and providing the fault type, the plurality of probable causes, and the at least one exact cause to a user.
Abstract of: US2025317224A1
The present disclosure provides a system and a method for generating a path loss propagation model through machine learning. The system generates a path loss propagation model for fifth generation (5G) networks for network planning. The path loss model predicts a reference signal received power/signal to noise interference ratio (RSRP/SINR) by leveraging fourth generation (4G) user data.
Abstract of: US2025316377A1
An example embodiment may involve obtaining, by a computing system, an observation of demographic values of an individual, vital sign values of the individual, and blood test values of the individual; applying, by the computing system, a machine learning model to the observation, wherein the machine learning model was trained with a training data set, wherein the training data set contained observations of corresponding demographic values, vital sign values, blood test values, and either urine albumin-to-creatinine ratio (UACR) values or urine protein-to-creatinine ratio (UPR) values for a plurality of individuals, and wherein the machine learning model is configured to provide predictions of whether further observations are indicative of undiagnosed albuminuria or proteinuria; and providing, by the computing system, a prediction of whether the individual exhibits undiagnosed albuminuria or proteinuria based on the observation.
Abstract of: US2025315681A1
This disclosure relates to artificial intelligence (AI) and machine learning networks for predicting or determining demand metrics across multiple channels. An analytics platform can receive channel events from multiple channels corresponding to geographic areas, and channel features related to demand conditions in the channels can be extracted from the channel events. During a training phase, the channel features can be accumulated into one or more training datasets for training one or more demand prediction models. The one or more demand prediction models can be trained to predict or determine demand metrics for each of the channels. The demand metrics can indicate or predict demand conditions based on the current conditions in the channels and/or based on future, predicted conditions in the channels. Other embodiments are disclosed herein as well.
Abstract of: US2025315674A1
Methods and systems for inducing model shift in a malicious computer's machine learning model are disclosed. A data processor can determine that a malicious computer uses a machine learning model with a boundary function to determine outcomes. The data processor can then generate transition data intended to shift the boundary function and then provide the transition data to the malicious computer. The data processor can repeat generating and providing the transition data, thereby causing the boundary function to shift over time.
Abstract of: US2025315738A1
A network operation system and method accesses a training dataset for a network operation predictive model including historical network operation records and historical decision records, generates an inferred protected class dataset by executing a protected class demographic model, executes an algorithmic bias model using as input the historical decision records and the inferred protected class dataset to generate one or more fairness metrics, executes, based on the fairness metrics, a bias adjustment model using as input the historical decision records and the inferred protected class dataset to generate an adjusted training dataset, trains the network operation predictive model using as input the adjusted training dataset, receives an electronic request for a network operation, executes the network operation predictive model using as input at least one attribute of the electronic request for the network operation, and executes the network operation based on a prediction of the network operation predictive model.
Publication No.: US2025315798A1 09/10/2025
Applicant:
FIIX INC [CA]
Abstract of: US2025315798A1
An industrial work order analysis system applies statistical and machine learning analytics to both open and closed work orders to identify problems and abnormalities that could impact manufacturing and maintenance operations. The analysis system applies algorithms to learn normal maintenance behaviors or characteristics for different types of maintenance tasks and to flag abnormal maintenance behaviors that deviate significantly from normal maintenance procedures. Based on this analysis, embodiments of the work order analysis system can identify unnecessarily costly maintenance procedures or practices, as well as predict asset failures and offer enterprise-specific recommendations intended to reduce machine downtime and optimize the maintenance process.