Abstract of: US2025384290A1
Computer-implemented systems and methods including language models for explaining and resolving code errors. A computer-implemented method may include: receiving one or more user inputs identifying a data set and providing a first user request to perform a first task based on at least a portion of the data set, wherein the data set is defined by an ontology; using a large language model (“LLM”) to identify a first machine learning (“ML”) model type from a plurality of ML model types; using the LLM to identify a first portion of the data set to be used to perform the first task; using the LLM to generate a first ML model training configuration; and executing the first ML model training configuration to train a first custom ML model, of the first ML model type, to perform the first task.
Abstract of: US2025384356A1
Systems and methods to utilize a machine learning model registry are described. The system deploys a first version of a machine learning model and a first version of an access module to server machines. Each of the server machines utilizes the model and the access module to provide a prediction service. The system retrains the machine learning model to generate a second version. The system performs an acceptance test of the second version of the machine learning model to identify it as deployable. The system promotes the second version of the machine learning model by identifying the first version of the access module as being interoperable with the second version of the machine learning model and by automatically deploying the first version of the access module and the second version of the machine learning model to the plurality of server machines to provide the prediction service.
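The promotion flow above can be sketched as follows. This is a minimal, hypothetical illustration: `ModelRegistry`, the score-based `acceptance_test`, and the interoperability sets are assumptions standing in for the patent's actual registry, not its implementation.

```python
# Hypothetical sketch of promoting a retrained model version while
# reusing an access module already known to be interoperable with it.

class ModelRegistry:
    def __init__(self):
        self.models = {}          # model version -> artifact (stub)
        self.access_modules = {}  # access-module version -> compatible model versions
        self.deployed = None      # (access_module_version, model_version)

    def register_model(self, version, artifact):
        self.models[version] = artifact

    def register_access_module(self, version, compatible_models):
        self.access_modules[version] = set(compatible_models)

    def acceptance_test(self, model_version, threshold=0.9):
        # Stand-in for a real evaluation: the artifact carries a score.
        return self.models[model_version]["score"] >= threshold

    def promote(self, model_version):
        if not self.acceptance_test(model_version):
            return False
        # Reuse the newest access module interoperable with the new model.
        for am_version in sorted(self.access_modules, reverse=True):
            if model_version in self.access_modules[am_version]:
                # "Deploy" both to the server machines (recorded here).
                self.deployed = (am_version, model_version)
                return True
        return False

registry = ModelRegistry()
registry.register_model("v1", {"score": 0.92})
registry.register_model("v2", {"score": 0.95})  # retrained model
registry.register_access_module("am1", {"v1", "v2"})
assert registry.promote("v2")
print(registry.deployed)  # -> ('am1', 'v2')
```

A failed acceptance test (a score below the threshold) leaves the current deployment untouched, mirroring the gatekeeping role the acceptance test plays in the abstract.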
Abstract of: US2025384223A1
Machine learning (ML) systems and methods for fact extraction and claim verification are provided. The system receives a claim and retrieves a document from a dataset. The document has a first relatedness score higher than a first threshold, which indicates that ML models of the system determine that the document is most likely to be relevant to the claim. The dataset includes supporting documents and claims including a first group of claims supported by facts from more than two supporting documents and a second group of claims not supported by the supporting documents. The system selects a set of sentences from the document. The set of sentences have second relatedness scores higher than a second threshold, which indicate that the ML models determine that the set of sentences are most likely to be relevant to the claim. The system determines whether the claim includes facts from the set of sentences.
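The two-stage, threshold-based retrieval described above can be sketched in a few lines. The word-overlap scorer below is a deliberately simple stub standing in for the system's ML relatedness models; the function names and thresholds are illustrative assumptions.

```python
# Sketch: retrieve documents above a first relatedness threshold, then
# select sentences above a second threshold as candidate evidence.

def relatedness(text, claim):
    # Stub scorer: fraction of claim words appearing in the text.
    claim_words = set(claim.lower().split())
    text_words = set(text.lower().split())
    return len(claim_words & text_words) / len(claim_words)

def verify(claim, documents, doc_threshold=0.5, sent_threshold=0.5):
    # Stage 1: keep documents scoring above the first threshold.
    retrieved = [d for d in documents if relatedness(d, claim) > doc_threshold]
    evidence = []
    for doc in retrieved:
        # Stage 2: keep sentences scoring above the second threshold.
        for sentence in doc.split(". "):
            if relatedness(sentence, claim) > sent_threshold:
                evidence.append(sentence)
    return evidence

docs = ["The Eiffel Tower is in Paris. It opened in 1889.",
        "Mount Everest is the highest mountain."]
print(verify("Eiffel Tower is located in Paris", docs))
```

The final determination of whether the claim includes facts from the selected sentences would be a further ML step; here the selected evidence is simply returned.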
Abstract of: US2025383882A1
A system comprises an on-chip memory (OCM) configured to maintain blocks of data used for a matrix operation and the result of the matrix operation, wherein each of the blocks of data is of a certain size. The system further comprises a first OCM streamer configured to stream first matrix data from the OCM to a first storage unit, and a second OCM streamer configured to stream second matrix data from the OCM to a second storage unit, wherein the second matrix data is from an unaligned address of the OCM that is not a multiple of the certain size. The system further comprises a matrix operation block configured to retrieve the first matrix data and the second matrix data from the first storage unit and the second storage unit, respectively, and perform the matrix operation based on the first matrix data and the second matrix data.
Abstract of: US2025384350A1
Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
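The use of a trained encoder for semantic similarity can be sketched as follows. The bag-of-words "encoder" here is a stand-in assumption for the neural network model the abstract describes; only the shape of the pipeline (encode, compare, pick a responsive action) reflects the text.

```python
# Sketch: encode query and candidates, rank candidates by cosine
# similarity of their encodings, return the best match.
import math
from collections import Counter

def encode(text):
    # Stub encoder: word counts. A real system would use the trained
    # neural encoder model described in the abstract.
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)   # Counter returns 0 for missing keys
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv)

def best_response(query, candidates):
    q = encode(query)
    return max(candidates, key=lambda c: cosine(q, encode(c)))

candidates = ["how do I reset my password",
              "store opening hours",
              "shipping costs to Europe"]
print(best_response("reset password help", candidates))  # -> how do I reset my password
```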
Abstract of: US2025385007A1
Provided is a process including: obtaining, with one or more processors, a set of data comprising a plurality of patient records; selecting a subset of a plurality of parameters as inputs to a machine learning system; generating a classifier using the machine learning system based on the training data and the subset of the plurality of parameters; receiving, with one or more processors, a patient record of a first user; and performing an analysis, with one or more processors, to identify acoustic measures from a voice sample of the first user.
Abstract of: US2025384312A1
A distributed inference engine system that includes multiple inference engines is disclosed. A particular inference engine of the multiple inference engines may receive a prompt and its associated data, and divide the data into multiple data portions that are distributed to the multiple inference engines. Operating in parallel, and using a machine-learning model and respective data portions, the multiple inference engines generate an initial token. The multiple inference engines also generate, in parallel and using corresponding portions of the machine-learning model and the initial token, a subsequent token.
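The first phase of the flow above — dividing the prompt's data into portions that the engines process in parallel before combining a token — can be sketched as follows. The splitting scheme and the toy "token" are illustrative assumptions; the parallel engines are simulated serially.

```python
# Sketch: distribute data portions across n engines, combine their
# partial results into one initial token.

def split(items, n):
    # Divide items into n near-equal contiguous portions.
    k, r = divmod(len(items), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        out.append(items[start:end])
        start = end
    return out

def first_token(data, n_engines):
    # Each engine reduces its portion (in parallel in a real system);
    # the partials are combined into a single toy "token".
    partials = [sum(portion) for portion in split(data, n_engines)]
    return sum(partials) % 100

data = list(range(10))
print(first_token(data, 4))  # -> 45, regardless of how the data is split
```

The second phase, where each engine holds a portion of the model itself to produce the subsequent token, would partition the model's layers rather than the data; it is omitted here for brevity.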
Abstract of: WO2025254902A1
A distributed inference engine system that includes multiple inference engines is disclosed. A particular inference engine of the multiple inference engines may receive a prompt and its associated data, and divide the data into multiple data portions that are distributed to the multiple inference engines. Operating in parallel, and using a machine-learning model and respective data portions, the multiple inference engines generate an initial token. The multiple inference engines also generate, in parallel and using corresponding portions of the machine-learning model and the initial token, a subsequent token.
Abstract of: WO2025252418A1
The present subject matter relates to a method for proving a statement descriptive of a functionality of an electronic circuit using a quantum computer and a proof assistant. The method comprises: encoding a current proof state into a vector of real numbers of a fixed length, the current proof state defining a task for proving at least part of the statement, encoding the vector into a quantum state of a quantum system of the quantum computer, using the quantum state as an input quantum state by a quantum machine learning model for providing an output quantum state whose measurement represents a proof step for the defined task, measuring the output quantum state, thereby obtaining the proof step for the defined task, providing the proof step to the proof assistant, in response to providing the proof step, receiving a next proof state from the proof assistant.
Abstract of: WO2025254972A1
In some aspects, a network node obtains one or more first quantities corresponding to positioning, sensing, or both machine learning model inputs, wherein the one or more first quantities comprise one or more channel measurements, quality indicators of the one or more channel measurements, timestamps of the one or more channel measurements, or any combination thereof, obtains one or more second quantities corresponding to positioning, sensing, or both machine learning model outputs, wherein the one or more second quantities comprise one or more ground truth labels, quality indicators of the one or more ground truth labels, timestamps of the one or more ground truth labels, or any combination thereof, and determines whether the one or more first quantities are associated with the one or more second quantities based on one or more association rules.
Abstract of: US2025378507A1
A crop prediction system performs various machine learning operations to predict crop production and to identify a set of farming operations that, if performed, optimize crop production. The crop prediction system uses crop prediction models trained with various machine learning operations on geographic and agronomic information. Responsive to receiving a request from a grower, the crop prediction system can access information representative of a portion of land corresponding to the request, such as the location of the land and the corresponding weather conditions and soil composition. The crop prediction system applies one or more crop prediction models to the accessed information to predict crop production and identify an optimized set of farming operations for the grower to perform.
Abstract of: US2025379798A1
The disclosure provides a device capability discovery method and a wireless communication device. The wireless communication device transmits a capability message of the wireless communication device to a source device having a pool of machine learning (ML) models. The capability message shows whether the wireless communication device is capable of executing multiple ML models. The wireless communication device downloads if needed, and activates one or more ML models from a subset in the pool of ML models. The subset in the pool of ML models matches the capability message of the wireless communication device.
Abstract of: US2025377647A1
Disclosed herein are AI-based platforms for enabling intelligent orchestration and management of power and energy. In various embodiments, a machine learning system is trained on a set of energy intelligence data and deployed on an edge device, wherein the machine learning system is configured to receive additional training by the edge device to improve energy management. In some embodiments, the energy management includes management of generation of energy by a set of distributed energy generation resources, management of storage of energy by a set of distributed energy storage resources, management of delivery of energy by a set of distributed energy delivery resources, and/or management of consumption of energy by a set of distributed energy consumption resources.
Abstract of: US2025378309A1
A computer implemented method for filtering user feedback and/or output of a machine learning model, comprising: providing a filter for filtering user feedback and/or output of a machine learning model; receiving user feedback and/or output of the machine learning model; filtering the user feedback and/or the output with the filter and determining a filtering result, wherein the filtering result comprises at least a detected error; providing the filtering result for further processing.
Abstract of: WO2025252383A1
The present subject matter relates to a method as follows. A current proof state may be encoded into a vector of real numbers of a fixed length. The current proof state defines a task for proving at least part of the statement. The vector may be encoded into a quantum state of a quantum system of the quantum computer. The quantum state may be used as an input quantum state by a quantum machine learning model for providing by the quantum machine learning model an output quantum state, wherein the measurement of the output quantum state represents a proof step for the defined task. The proof step may be provided to the proof assistant. In response to providing the proof step, a next proof state may be received from the proof assistant. It may be determined whether the received proof state indicates that the proof is completed. In response to determining that the proof is not completed, the received proof state may be used as the current proof state for repeating the method.
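The iterative loop above (encode the proof state, obtain a proof step from the model, hand it to the proof assistant, repeat until the proof completes) can be sketched classically. Everything below is a stand-in assumption: the quantum ML model and its measurement are replaced by a stub, and `ProofAssistant` has toy semantics in which each step discharges one goal.

```python
# Classical toy sketch of the encode -> model -> proof-assistant loop.

def encode(state):
    # Encode a proof state into a fixed-length vector of reals.
    vec = [float(ord(c)) for c in state[:8]]
    return vec + [0.0] * (8 - len(vec))

def model_step(vector):
    # Stand-in for the quantum model plus measurement: pick a step.
    return "intro" if sum(vector) % 2 < 1 else "apply"

class ProofAssistant:
    def __init__(self, goals):
        self.goals = goals
    def apply(self, step):
        # Toy semantics: each accepted step discharges one goal.
        self.goals = self.goals[1:]
        return "done" if not self.goals else self.goals[0]

def prove(assistant, state, max_steps=10):
    for _ in range(max_steps):
        step = model_step(encode(state))       # proof step for the task
        state = assistant.apply(step)          # next proof state
        if state == "done":                    # proof completed
            return True
    return False                               # budget exhausted

assert prove(ProofAssistant(["goal_a", "goal_b"]), "goal_a")
```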
Abstract of: WO2025255369A1
Methods and systems are disclosed for selectively modifying the behavior of a pre-trained language model with respect to a designated task. A task-specific subspace is identified by training low-rank matrices for selected layers of the trained machine learning model, while freezing other parameters. The identified subspace is used to either attenuate or enhance task contributions by adjusting one or more model weight matrices. In some embodiments, overlapping subspaces are discriminated to preserve related task performance. These operations can be performed without access to original training data or full retraining. Some aspects of the disclosed techniques can allow efficient knowledge removal or addition in language models while minimizing adverse effects on unrelated tasks.
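The core weight adjustment can be sketched with NumPy: a task-specific subspace is represented by trained low-rank factors (as in LoRA-style adapters), and a scaled rank-limited update is added to or subtracted from a frozen weight matrix. The rank, scale, and shapes below are illustrative assumptions, not the disclosed parameters.

```python
# Minimal sketch: attenuate or enhance a task's contribution by
# adjusting a frozen weight matrix along a trained low-rank subspace.
import numpy as np

rng = np.random.default_rng(0)
d, r = 6, 2                      # hidden size, subspace rank
W = rng.normal(size=(d, d))      # frozen pre-trained weight matrix
A = rng.normal(size=(r, d))      # trained low-rank factors spanning
B = rng.normal(size=(d, r))      # the task-specific subspace

def adjust(W, B, A, alpha):
    # alpha < 0 attenuates the task direction, alpha > 0 enhances it.
    return W + alpha * (B @ A)

W_removed = adjust(W, B, A, alpha=-1.0)
W_boosted = adjust(W, B, A, alpha=+0.5)

# The update touches only a rank-r subspace of the weight space, which
# is why unrelated tasks are largely preserved.
print(np.linalg.matrix_rank(W_removed - W))  # -> 2
```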
Abstract of: US2025378389A1
Automatic activation and configuration of robotic process automation (RPA) workflows using machine learning (ML) is disclosed. One or more parts of an RPA workflow may be turned on or off based on one or more probabilistic ML models. RPA robots may be configured to modify parameters, determine how much of a certain resource to provide, determine more optimal thresholds, etc. Such RPA workflows implementing ML may thus be hybrids of both deterministic and probabilistic logic, and may learn and improve over time by retraining the ML models, adjusting the confidence thresholds, using local/global confidence thresholds, providing or adjusting modifiers for the local confidence thresholds, implementing a supervisor system that monitors ML model performance, etc.
Abstract of: GB2641642A
A spectrum forecasting system, including a spectrum receiver to monitor a radio spectrum and collect live signal data; a spectrum forecasting deep learning model communicatively coupled to the spectrum receiver to receive the live signal data as input data, infer future vacancies, and chart a path through the future vacancies for assignment. A spectrum forecasting method, including receiving live signal data from a radio spectrum; inputting the live signal data into a spectrum forecasting deep learning model; inferring, using the spectrum forecasting deep learning model, future vacancies; and charting, using the spectrum forecasting deep learning model, a path through the future vacancies for assignment.
Abstract of: CN120660086A
The invention discloses a method, a system and a computer system for classifying streaming data at an edge device. The method includes obtaining streaming data of a file at the edge device, processing a set of chunks associated with the streaming data of the file using a machine learning model, and classifying the file at the edge device prior to processing all content of the file.
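Classifying before the whole file has streamed in can be sketched as follows: score each chunk as it arrives and emit a label once a confidence threshold is crossed. The printable-byte scorer is a stub standing in for the ML model; the thresholds and labels are illustrative assumptions.

```python
# Sketch: chunk-wise early classification of a streaming file.

def chunk_score(chunk):
    # Stub model: probability the file is "text", estimated as the
    # fraction of printable ASCII bytes in the chunk.
    printable = sum(32 <= b < 127 for b in chunk)
    return printable / len(chunk)

def classify_stream(chunks, threshold=0.8, min_chunks=2):
    running, seen = 0.0, 0
    for chunk in chunks:
        seen += 1
        running += chunk_score(chunk)
        confidence = running / seen
        if seen >= min_chunks and confidence >= threshold:
            return "text", seen              # classified early
        if seen >= min_chunks and confidence <= 1 - threshold:
            return "binary", seen            # classified early
    # Fell through: all chunks were needed.
    return ("text" if running / seen >= 0.5 else "binary"), seen

chunks = [b"hello world " * 10, b"more ascii text", b"\x00\xff" * 50]
label, consumed = classify_stream(chunks)
print(label, consumed)  # -> text 2  (decided before the third chunk)
```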
Abstract of: EP4660892A1
A model generation apparatus includes a parameter identification unit that identifies a parameter affecting misclassification of misclassification data incorrectly classified by a trained classification model, among parameters of the classification model for each type of the misclassification, based on the misclassification data incorrectly classified by the trained classification model; a parameter correction unit that generates a correction parameter obtained by correcting the parameter identified by the parameter identification unit for each type of the misclassification; and a parameter integration unit that generates an integrated model including an integrated parameter obtained by integrating the correction parameters for each type of the misclassification.
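The three units above (identify, correct, integrate) can be sketched as a toy pipeline. The "influence" heuristic, the correction nudge, and the averaging integration are all illustrative assumptions over a model represented as a flat parameter vector.

```python
# Toy sketch: per-error-type parameter identification and correction,
# followed by integration of the corrected models into one.

def identify(params, misclassified):
    # Stub influence measure: for each misclassified example, pick the
    # parameter index of its largest-magnitude feature.
    indices = set()
    for x in misclassified:
        indices.add(max(range(len(x)), key=lambda i: abs(x[i])))
    return indices

def correct(params, indices, delta=-0.1):
    # Nudge only the parameters identified for this error type.
    return [p + delta if i in indices else p
            for i, p in enumerate(params)]

def integrate(corrections):
    # Integrate per-error-type corrections by element-wise averaging.
    n = len(corrections)
    return [sum(vals) / n for vals in zip(*corrections)]

params = [1.0, 1.0, 1.0]
type_a = correct(params, identify(params, [[5, 0, 0]]))   # error type A
type_b = correct(params, identify(params, [[0, 0, 7]]))   # error type B
print(integrate([type_a, type_b]))
```

Each error type touches only its own identified parameters, so the integrated model blends corrections without one error type overwriting another's fix.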
Abstract of: EP4660880A1
Computer implemented method for filtering user feedback and/or output of a machine learning model, comprising: providing a filter for filtering user feedback and/or output of a machine learning model; receiving user feedback and/or output of the machine learning model; filtering the user feedback and/or the output with the filter and determining a filtering result, wherein the filtering result comprises at least a detected error; providing the filtering result for further processing.
Abstract of: US2025371419A1
According to an embodiment, a method carried out by a computer system is proposed for tuning hyperparameters in a machine learning model, the computer system having a processing unit designed to execute a plurality of processes in parallel. The method comprises executing a plurality of independent hyperparameter search methods in different parallel processes of the processing unit, the results of tests of hyperparameter combinations being stored in a memory of the computer system shared among the various processes. Each process assesses, based on the test results stored in memory, whether a combination of hyperparameters it intends to search has already been tested by another process, and, if so, takes the stored results into account in its own test history instead of repeating the test.
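The shared-memory deduplication can be sketched with threads standing in for the parallel processes and a dict standing in for the shared memory. The reservation sentinel and the toy objective are illustrative assumptions; the point is that each hyperparameter combination is evaluated at most once across all workers.

```python
# Sketch: independent search workers share one result store, so a
# combination tested by one worker is reused by the others.
import itertools
import threading

shared_results = {}        # combination -> score (the shared memory)
lock = threading.Lock()
evaluations = []           # records which worker actually ran a test
PENDING = object()         # reservation marker: "being tested"

def evaluate(combo):
    # Stand-in for a real training run.
    lr, depth = combo
    return -abs(lr - 0.1) - abs(depth - 3)

def search(worker, combos):
    history = {}
    for combo in combos:
        with lock:
            if combo in shared_results:
                owner = False          # someone else tested (or is testing) it
            else:
                shared_results[combo] = PENDING
                owner = True           # this worker claims the test
        if owner:
            score = evaluate(combo)
            evaluations.append((worker, combo))
            with lock:
                shared_results[combo] = score
        while shared_results.get(combo) is PENDING:
            pass                       # wait for the owner to publish
        history[combo] = shared_results[combo]  # reuse the stored result
    return history

grid = list(itertools.product([0.01, 0.1], [2, 3]))
threads = [threading.Thread(target=search, args=(w, grid)) for w in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(shared_results), len(evaluations))  # -> 4 4
```

Three workers each sweep the same four-combination grid, yet only four evaluations run in total; the other eight lookups are served from the shared store.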
Abstract of: US2025371491A1
A dynamic supply chain planning system that analyzes historical lead time data and uses machine learning algorithms to forecast future lead times based on historical lead time data, weather data, and financial data related to locations and dates within the supply chain.
Abstract of: US2025369762A1
In some aspects, the techniques described herein relate to a method including: receiving, by a collaboration service, location data of a user, wherein the location data includes a timestamp; verifying, by the collaboration service and based on a digital itinerary associated with the user, a location of the user; processing data from a data profile associated with the user as input data to a machine learning model; receiving, by the collaboration service and as output of the machine learning model, a plurality of predicted travel objective classifications; determining, by the collaboration service, a plurality of travel objectives, wherein each of the plurality of travel objectives is associated with one of the plurality of predicted travel objective classifications; determining, by the collaboration service, a travel objective within a predefined proximity of the location of the user; and displaying the travel objective to the user via a planning interface.
Publication No.: US2025371429A1 04/12/2025
Applicant:
PAYPAL INC [US]
Abstract of: US2025371429A1
Techniques are disclosed in which a server computer system receives, from a plurality of user computing devices, a plurality of device-trained models and obfuscated sets of user data stored at the plurality of user computing devices, where the device-trained models are trained at respective ones of the plurality of user computing devices using the respective sets of user data prior to obfuscation. In some embodiments, the server computer system determines similarity scores for the plurality of device-trained models, wherein the similarity scores are determined based on a performance of the device-trained models. In some embodiments, the server computer system identifies, based on the similarity scores, at least one of the plurality of device-trained models as a low-performance model. In some embodiments, the server computer system transmits, to the user computing device corresponding to the low-performance model, an updated model.
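The server-side scoring step can be sketched as follows. The models here are stub threshold classifiers, and the performance-based similarity (distance from the mean accuracy) is an illustrative assumption about how the similarity scores might be derived; the obfuscation itself is out of scope.

```python
# Sketch: score device-trained models on (obfuscated) data, derive
# performance-based similarity, and flag the low performer.

def accuracy(model, data):
    # model: a decision threshold; data: (feature, label) pairs.
    return sum((x > model) == label for x, label in data) / len(data)

def find_low_performer(models, obfuscated_data):
    scores = {d: accuracy(m, obfuscated_data) for d, m in models.items()}
    mean = sum(scores.values()) / len(scores)
    # Similarity: how close each model's performance is to the others'.
    similarity = {d: -abs(s - mean) for d, s in scores.items()}
    # The worst-scoring (and least similar) model gets an updated model.
    return min(scores, key=lambda d: (scores[d], similarity[d]))

data = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
models = {"device_a": 0.5, "device_b": 0.95, "device_c": 0.45}
print(find_low_performer(models, data))  # -> device_b
```

In a full system the server would then transmit an updated model to `device_b`, the device whose locally trained model underperformed its peers.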