Abstract of: US2025343816A1
In various examples there is a method of empirically measuring a level of security of a training pipeline. The training pipeline is configured to train machine learning models using confidential training data. The method comprises storing a representation of a joint distribution of the false positive rate and false negative rate of membership inference attacks on a plurality of machine learning models trained using the training pipeline. The method uses the representation to compute a posterior distribution of the level of security from observations of membership inference attacks on the plurality of machine learning models trained using the training pipeline. A confidence interval of the level of security is computed from the posterior distribution and the confidence interval is stored.
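The abstract does not state how the posterior or the interval is computed; below is a minimal Monte Carlo sketch, assuming independent Beta posteriors for the attack's true positive and false positive rates (a simplification of the stored joint FPR/FNR representation) and an illustrative definition of the security level as one minus the attack advantage. The counts in the example are hypothetical.

```python
import random

def security_level_interval(tp, fn, fp, tn, n_samples=20000, cred=0.95):
    # Posterior draws of the attack's TPR (= 1 - FNR) and FPR from Beta(1 + successes, 1 + failures);
    # the counts come from membership inference attack observations on member / non-member records.
    random.seed(0)
    levels = []
    for _ in range(n_samples):
        tpr = random.betavariate(1 + tp, 1 + fn)
        fpr = random.betavariate(1 + fp, 1 + tn)
        levels.append(1.0 - max(0.0, tpr - fpr))   # illustrative security level: 1 - attack advantage
    levels.sort()
    lo = levels[int((1 - cred) / 2 * n_samples)]
    hi = levels[int((1 + cred) / 2 * n_samples)]
    return lo, hi                                   # credible interval to be stored

# Hypothetical observations: 120/200 members and 45/200 non-members flagged by the attack.
print(security_level_interval(tp=120, fn=80, fp=45, tn=155))
```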
Abstract of: US2025342936A1
A system for generating a lifestyle-based disease prevention plan, the system including a computing device configured to receive at least a user biomarker input, produce a user profile as a function of the at least a user biomarker input, and generate a lifestyle-based disease prevention plan as a function of the user profile, including training a machine learning process with a lifestyle training data set, where the lifestyle training data set further comprises lifestyle elements correlated to a plurality of outputs containing diseases prevented, and producing the lifestyle-based disease prevention plan as a function of the user profile and the machine learning process.
Abstract of: EP4645322A1
Comprising at least one processor obtaining a combination of information identifying each of the raw materials received from the user and the amount of each of the raw materials, and obtaining a predicted value of a physical property of the property name to be predicted for a composition comprising each of the raw materials, by inputting into a first machine learning model at least one of the chemical fingerprints, SMILES strings, or chemical graph structure data, or the product name or substance name, corresponding to each of the raw materials and the amount of each of the raw materials, or by inputting into a second machine learning model a set of values based on at least one of the chemical fingerprints, SMILES strings, or chemical graph structure data, or the product name or substance name, corresponding to each of the raw materials and the amount of each of the raw materials, wherein the first machine learning model is a model in which parameters are adjusted so that it can predict outputs from inputs by means of a learning data set that takes as inputs at least one of the chemical fingerprints, SMILES strings, or chemical graph structure data, or product names or substance names, corresponding to each of the raw materials and the amount of each of the said raw materials, and takes as outputs physical property values of the target property names to be predicted, and the second machine learning model is a model in which parameters are adjusted so that it can predict outputs from inputs.
Abstract of: CN120266139A
Systems and methods for predicting group composition of items are disclosed. A system for predicting group composition of items may include a memory storing instructions and at least one processor configured to execute the instructions to perform operations including: receiving entity identification information and a timestamp associated with a transaction without receiving information distinguishing items associated with the transaction; determining a localized machine learning model based on the entity identification information, the localized machine learning model trained to predict a category of an item based on transaction information applied to all of the items associated with the transaction; and applying the localized machine learning model to a model input to generate a predicted category of items associated with the transaction, the model input including the received entity identification information and timestamp but not information distinguishing items associated with the transaction.
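As a rough illustration of a "localized" model that never sees item-level information, the sketch below predicts a category from entity identity and hour-of-day alone, using per-entity frequency counts as a stand-in for the trained model; all names and data are hypothetical.

```python
from collections import Counter, defaultdict

class LocalizedCategoryModel:
    """Toy stand-in: predicts the most frequent category observed for an
    (entity, hour-of-day) pair; inputs never distinguish individual items."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, labelled_transactions):
        # labelled_transactions: iterable of (entity_id, hour, category)
        for entity_id, hour, category in labelled_transactions:
            self.counts[(entity_id, hour)][category] += 1

    def predict(self, entity_id, hour):
        history = self.counts.get((entity_id, hour))
        return history.most_common(1)[0][0] if history else "unknown"

model = LocalizedCategoryModel()
model.fit([("store-17", 8, "coffee"), ("store-17", 8, "coffee"), ("store-17", 20, "grocery")])
print(model.predict("store-17", 8))   # -> "coffee"
```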
Abstract of: EP4645709A1
Provided are a method and an apparatus for performing beam management in a wireless communication system. The method of a terminal may include triggering at least one beam failure recovery (BFR) for a cell that performs beam management using an artificial intelligence and/or machine learning model, deactivating the artificial intelligence and/or machine learning model based on the number of beam failure recoveries triggered during a specific time duration, and transmitting deactivation information of the artificial intelligence and/or machine learning model to a base station. The method of the base station may include transmitting, to the terminal, configuration information related to the artificial intelligence and/or machine learning model, receiving, from the terminal, the deactivation information of the artificial intelligence and/or machine learning model, and, based on the received deactivation information, stopping beam generation related to the artificial intelligence and/or machine learning model.
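A minimal sketch of the terminal-side rule described above: count BFR triggers inside a sliding time window and, once a limit is reached, deactivate the AI/ML beam model and report the deactivation. The threshold, window length, and message format are illustrative assumptions, not values from the specification.

```python
from collections import deque

class BfrGuard:
    def __init__(self, max_bfr=3, window_s=10.0):
        self.max_bfr = max_bfr        # assumed trigger limit
        self.window_s = window_s      # assumed observation window in seconds
        self.events = deque()
        self.model_active = True

    def on_beam_failure_recovery(self, t_now, send_to_gnb):
        self.events.append(t_now)
        # Keep only BFR triggers inside the specific time duration.
        while self.events and t_now - self.events[0] > self.window_s:
            self.events.popleft()
        if self.model_active and len(self.events) >= self.max_bfr:
            self.model_active = False
            send_to_gnb({"ai_ml_beam_model": "deactivated", "bfr_count": len(self.events)})

guard = BfrGuard()
for t in (1.0, 4.5, 7.2):             # three BFR triggers within 10 s -> deactivation report
    guard.on_beam_failure_recovery(t, send_to_gnb=print)
```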
Abstract of: CN120500694A
Systems and methods for applying a machine learning (ML) model to determine a startup confidence value for a generator are presented herein. The computing system may identify a first plurality of parameters of the first generator. The first plurality of parameters may identify operation of the first generator. The computing system may apply the first plurality of parameters to the ML model to determine a first confidence value that identifies a first likelihood that the first generator starts upon initiation. The computing system may provide an output based on the first confidence value of the first generator.
Abstract of: US2025336521A1
A cell manufacturing management platform facilitates management of a cell manufacturing process. The cell manufacturing management platform tracks events associated with a cell manufacturing process and coordinates between disparate entities involved in the process. The cell manufacturing management platform utilizes machine learning techniques to generate inferences associated with event scheduling in a manner that optimizes an efficiency metric and reduces likelihood of exceptions occurring. Machine learning models may furthermore be used to generate various alerts or other actions associated with the process. A user interface enables different participating entities to track progress of the process and upcoming events.
Abstract of: US2025335799A1
Methods, systems, and apparatus, including computer-readable media, for multi-pass processing for artificial intelligence chatbots. In some implementations, a system obtains code or instructions generated by one or more artificial intelligence or machine learning (AI/ML) models, where the code or instructions specify criteria to retrieve data from a data source to respond to a prompt from a user. The system determines that the code or instructions specify multiple stages of data processing. The system generates a set of results from the data source based on the generated code or instructions, and obtains a response to the prompt that the one or more AI/ML models generate using at least a portion of the set of results. The system generates an interpretation statement that describes each of the multiple stages of data processing and provides output that includes (i) the response to the prompt and (ii) the generated interpretation statement.
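A schematic sketch of the multi-pass idea: execute each AI/ML-generated processing stage over the data source, then assemble an interpretation statement describing every stage alongside the final results. Stage names and the data-source shape are hypothetical.

```python
def run_multipass(stages, rows):
    """stages: list of (name, callable) pairs derived from AI/ML-generated code."""
    descriptions = []
    for name, stage in stages:
        rows = stage(rows)
        descriptions.append(f"{name}: produced {len(rows)} rows")
    interpretation = "; ".join(descriptions)
    return rows, interpretation

stages = [
    ("filter to 2024", lambda rs: [r for r in rs if r["year"] == 2024]),
    ("top 2 by sales", lambda rs: sorted(rs, key=lambda r: r["sales"])[-2:]),
]
data = [{"year": 2024, "sales": 5}, {"year": 2023, "sales": 9}, {"year": 2024, "sales": 7}]
results, statement = run_multipass(stages, data)
print(statement)   # the interpretation statement returned alongside the response
```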
Abstract of: US2025335821A1
Embodiments of this specification describe technologies for task processing. One method includes: in response to receiving a request for a digital assistant, obtaining a processing configuration associated with the digital assistant, the processing configuration comprising one or more inference rules, at least one of the one or more inference rules being configured to perform inference on the request using a corresponding first-type machine learning model; processing the request based on the processing configuration to determine a response of the digital assistant to the request; and in response to a failure to process the request based on the processing configuration, performing inference on the request by invoking a second-type machine learning model to determine a response of the digital assistant to the request, wherein a resource cost of invoking the second-type machine learning model is greater than a resource cost of invoking the first-type machine learning model.
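The tiered fallback can be pictured with the short sketch below: configured inference rules backed by cheap first-type models are tried first, and only on failure is the costlier second-type model invoked. The callables are stand-ins, not the specification's interfaces.

```python
def handle_request(request, rules, expensive_model):
    # First-type path: each rule may answer the request cheaply or decline.
    for rule in rules:
        try:
            response = rule(request)
            if response is not None:
                return response
        except Exception:
            continue   # a rule error counts as a processing failure for that rule
    # Second-type path: higher resource cost, used only when the rules fail.
    return expensive_model(request)

rules = [lambda req: "Forecast: sunny" if "weather" in req else None]
expensive = lambda req: f"Large-model answer for: {req}"
print(handle_request("what's the weather today?", rules, expensive))
print(handle_request("summarize my unread mail", rules, expensive))
```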
Abstract of: US2025335796A1
Systems and methods include machine learning models operating at different frequencies. An example method includes obtaining images at a threshold frequency from one or more image sensors positioned about a vehicle. Location information associated with objects classified in the images is determined based on the images. The images are analyzed via a first machine learning model at the threshold frequency. For a subset of the images, the first machine learning model uses output information from a second machine learning model, the second machine learning model being performed at less than the threshold frequency.
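A compact sketch of the two-frequency arrangement: the first model runs on every frame at the threshold frequency, while the second model runs only every few frames and its most recent output is reused in between. The frame ratio and model callables are assumptions.

```python
def run_perception(frames, fast_model, slow_model, slow_every=5):
    slow_context = None
    outputs = []
    for i, frame in enumerate(frames):
        if i % slow_every == 0:
            slow_context = slow_model(frame)             # lower-frequency pass
        outputs.append(fast_model(frame, slow_context))  # threshold-frequency pass
    return outputs

fast = lambda frame, ctx: {"frame": frame, "context": ctx}
slow = lambda frame: f"scene-context@{frame}"
print(run_perception(range(7), fast, slow)[:3])
```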
Abstract of: US2025335331A1
A method comprises causing scanning of at least a portion of code in response to one or more changes to the code, processing data generated as a result of the scanning, and analyzing the data using at least one machine learning algorithm to predict whether the one or more changes will cause a reduction in quality of the code. In response to a prediction that the one or more changes will cause a reduction in the quality of the code, a placeholder is generated in a code development application to address the reduction.
Abstract of: US2025335829A1
A system collects user data describing characteristics of multiple users. A first machine-learning model assesses this data to predict churn scores of the users. When a user sends an error signal concerning their experience with the system, the system retrieves the churn score identified for this user and applies a second machine-learning model. This second model takes as input the user's data and their churn score to select a corrective action among a set of corrective actions aimed at reducing the user's churn score. After implementing the selected corrective action, the system collects and updates the user's data to reflect their continued engagement or departure. The system uses this updated user data to retrain the first or second model to improve its predictive accuracy.
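The error-signal path can be sketched as follows, with both models replaced by arbitrary callables: the stored churn score is looked up, a second model scores each candidate corrective action, and the action with the lowest expected churn is selected. Every name and value here is illustrative.

```python
def handle_error_signal(user_id, user_data, churn_model, action_model, actions):
    features = user_data[user_id]
    churn_score = churn_model(features)                       # first model's prediction
    scored = [(action_model(features, churn_score, a), a) for a in actions]
    expected_churn, best_action = min(scored)                 # action expected to reduce churn most
    return best_action, churn_score, expected_churn

users = {"u1": {"logins_per_week": 1, "tickets": 3}}
churn = lambda f: 0.7 if f["logins_per_week"] < 2 else 0.2
action = lambda f, c, a: c - {"discount": 0.25, "onboarding call": 0.15, "none": 0.0}[a]
print(handle_error_signal("u1", users, churn, action, ["discount", "onboarding call", "none"]))
```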
Abstract of: US2025335326A1
Aspects relate to systems and methods for determining a user-specific mission operational performance using machine-learning processes. An exemplary system includes a computing device configured to perform operations including receiving user-input structured data from at least a user device, receiving observed structured data related to the user and a mission performance metric, inputting the user-input structured data and the observed structured data to a machine-learning model, generating a user performance metric as a function of the machine-learning model, receiving a deterministic mission operational performance metric, disaggregating a deterministic user performance metric as a function of the deterministic mission operational performance metric and the mission performance metric, inputting training data to a machine-learning algorithm, where the training data includes the user-input structured data and the observed structured data correlated to the deterministic user performance metric, and training the machine-learning model as a function of the machine-learning algorithm and the training data.
Abstract of: US2025335588A1
Described are examples for detecting anomalies in files. Each of multiple files can be processed to generate corresponding flat files. Values can be extracted from multiple lines of each of the flat files into a parameter vector. Patterns can be generated from the multiple lines based on the values extracted. The parameter vector and the patterns can be provided as input to a machine learning (ML) model to obtain a set of rules for the files based on conditional probabilities. The set of rules can be applied to a set of one or more files to detect anomalies in values in the set of one or more files.
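One way to read the conditional-probability rules is sketched below: from historical parameter vectors, learn "if field i equals a then field j equals b" rules whose conditional probability clears a threshold, then flag values in new files that violate them. The threshold and data shapes are assumptions.

```python
from collections import Counter, defaultdict

def learn_rules(vectors, min_prob=0.95):
    pair_counts = defaultdict(Counter)
    antecedent_counts = Counter()
    for vector in vectors:                       # each flat file reduced to a parameter vector
        for i, a in enumerate(vector):
            antecedent_counts[(i, a)] += 1
            for j, b in enumerate(vector):
                if i != j:
                    pair_counts[(i, a)][(j, b)] += 1
    rules = []
    for (i, a), consequents in pair_counts.items():
        for (j, b), n in consequents.items():
            prob = n / antecedent_counts[(i, a)]
            if prob >= min_prob:
                rules.append(((i, a), (j, b), prob))
    return rules

def detect_anomalies(vector, rules):
    # A value is anomalous when it breaks a high-confidence rule whose antecedent holds.
    return [(j, vector[j], expected)
            for (i, a), (j, expected), _ in rules
            if vector[i] == a and vector[j] != expected]

history = [("eu", "https", 443), ("eu", "https", 443), ("us", "https", 443)]
print(detect_anomalies(("eu", "http", 443), learn_rules(history)))
```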
Abstract of: US2025334943A1
An AI-based platform for enabling intelligent orchestration and management of power and energy is provided herein. The AI-based platform includes a digital twin system including a plurality of digital twins of energy operating assets, the plurality of digital twins of energy operating assets including at least one energy generation digital twin, energy storage digital twin, energy delivery digital twin, and/or energy consumption digital twin, and a set of energy simulation systems configured to generate a simulation of energy-related behavior of at least one of the plurality of digital twins of energy operating assets, and a machine-learning system configured to generate a predicted state of at least one of the energy operating assets. The simulation of energy-related behavior is based on historical patterns, current states, and the predicted state of at least one of the energy operating assets.
Abstract of: US2025335160A1
Systems and methods for application modernization using machine learning (ML) are disclosed herein. An example system receives software development information corresponding to one or more applications, the software development information including human-readable code. The system provides the software development information to an ML model. The ML model is trained using application modernization training data corresponding to best practices for modernizing historical applications based upon historical software development information. The ML model includes a large language model trained to interpret the human-readable code. The ML model generates application modernization information corresponding to at least one application of the one or more applications. The application modernization information includes technical requirements of a corresponding application, and application modernization recommendations of the corresponding application based upon the one or more technical requirements. In response to generating the application modernization information, the system provides the application modernization information to a computing device.
Abstract of: WO2025226317A2
Techniques for encrypting data within a 5G Open Radio Access Network (O-RAN) include receiving, at a first module of the 5G O-RAN, a first set of one or more data packets encrypted using mathematical encryption. The method also includes determining, using a machine-learning model trained to detect cybersecurity threats, the existence of a cybersecurity threat associated with a voice or data transaction, and in response, determining to switch encryption from the mathematical encryption to quantum encryption. The method further includes encrypting the one or more data packets using a quantum encryption key to generate quantum-encrypted data packets, transmitting the quantum encryption key from the first module of the 5G O-RAN core to a second module of the 5G O-RAN over a quantum key distribution (QKD) channel, and transmitting the quantum-encrypted data packets from the first module of the 5G O-RAN to the second module of the 5G O-RAN.
Abstract of: WO2025226533A1
A method may receive, by one or more processors, relevant data from a plurality of data sources. The method may input the relevant data, by the one or more processors, into a machine learning model for generating iterations of object models and iterations of object designs. The method may assess, by the one or more processors utilizing an artificial intelligence module, one or more of object performance metrics, user experience indicators, or industry acceptance probabilities based on user feedback and state patterns. The method may cause, by the one or more processors, iterative refinement of the object models and the object designs.
Abstract of: WO2025226527A1
Systems and methods for application modernization using machine learning (ML) are disclosed herein. An example system receives software development information corresponding to one or more applications, the software development information including human-readable code. The system provides the software development information to an ML model. The ML model is trained using application modernization training data corresponding to best practices for modernizing historical applications based upon historical software development information. The ML model includes a large language model trained to interpret the human-readable code. The ML model generates application modernization information corresponding to at least one application of the one or more applications. The application modernization information includes technical requirements of a corresponding application, and application modernization recommendations of the corresponding application based upon the one or more technical requirements. In response to generating the application modernization information, the system provides the application modernization information to a computing device.
Abstract of: WO2025224675A1
A cell manufacturing management platform facilitates management of a cell manufacturing process. The cell manufacturing management platform tracks events associated with a cell manufacturing process and coordinates between disparate entities involved in the process. The cell manufacturing management platform utilizes machine learning techniques to generate inferences associated with event scheduling in a manner that optimizes an efficiency metric and reduces likelihood of exceptions occurring. Machine learning models may furthermore be used to generate various alerts or other actions associated with the process. A user interface enables different participating entities to track progress of the process and upcoming events.
Abstract of: US2025337742A1
Access to secured items in a computing system is requested instead of being persistent. Access requests may be granted on a just-in-time basis. Anomalous access requests are detected using machine learning models based on historic patterns. Models utilizing conditional probability or collaborative filtering also facilitate the creation of human-understandable explanations of threat assessments. Individual machine learning models are based on historic data of users, peers, cohorts, services, or resources. Models may be weighted, and then aggregated in a subsystem to produce an access request risk score. Scoring principles and conditions utilized in the scoring subsystem may include probabilities, distribution entropies, and data item counts. A feedback loop allows incremental refinement of the subsystem. Anomalous requests that would be automatically approved under a policy may instead face human review, and low threat requests that would have been delayed by human review may instead be approved automatically.
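The scoring subsystem can be pictured as a weighted combination of per-slice anomaly models, with the aggregated score routed to automatic approval, human review, or denial. Weights, thresholds, and the model callables below are illustrative assumptions.

```python
def access_risk_score(request, models, weights):
    # Each model scores the request against its own history slice (user, peers, cohort, service, resource).
    total = sum(weights[name] for name in models)
    return sum(weights[name] * models[name](request) for name in models) / total

def route(request, models, weights, low=0.2, high=0.8):
    score = access_risk_score(request, models, weights)
    if score >= high:
        return "deny", score
    if score <= low:
        return "auto-approve", score     # low-threat requests skip human review
    return "human-review", score         # anomalous requests get reviewed even if policy would auto-approve

models = {"user_history": lambda r: 0.9, "peer_history": lambda r: 0.4, "resource_history": lambda r: 0.7}
weights = {"user_history": 2.0, "peer_history": 1.0, "resource_history": 1.0}
print(route({"item": "prod-db-admin"}, models, weights))
```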
Abstract of: US2025335786A1
Systems, media, and computer-implemented methods are provided for identifying similar chunks of text to tune a text similarity model, such as a text similarity model that is used to find content in response to queries. Using a masked language model, a machine learning model may be tuned on different content from that which the machine learning model was trained. The machine learning model as tuned may be used to determine vector embeddings for terms in chunks of content. Chunks may be matched to each other by finding a term in one chunk having a highest similarity score with a corresponding term in another chunk. Aggregate similarity scores may be determined between the chunks based on the term-to-term similarity scores. If an aggregate similarity score for a pair of chunks satisfies one or more conditions, a text similarity model may be tuned to identify the pair as similar.
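The term-to-term matching step might look like the sketch below: for each term embedding in one chunk, take its best cosine match in the other chunk, average those best-match scores into an aggregate chunk similarity, and keep pairs above a threshold as tuning pairs. Embeddings are assumed to come from the tuned masked language model; the threshold is illustrative.

```python
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm if norm else 0.0

def chunk_similarity(chunk_a, chunk_b):
    # chunk_a, chunk_b: lists of per-term embedding vectors.
    best_matches = [max(cosine(ea, eb) for eb in chunk_b) for ea in chunk_a]
    return sum(best_matches) / len(best_matches)     # aggregate chunk-to-chunk score

def similar_pairs(chunks, threshold=0.85):
    return [(i, j) for i in range(len(chunks)) for j in range(i + 1, len(chunks))
            if chunk_similarity(chunks[i], chunks[j]) >= threshold]

chunks = [[[1.0, 0.0], [0.7, 0.7]], [[0.9, 0.1], [0.6, 0.8]], [[0.0, 1.0]]]
print(similar_pairs(chunks))   # pairs that would be labelled similar for tuning
```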
Abstract of: WO2025226511A1
Diagnostic laboratory systems provided herein employ a machine learning software model to identify locations of sample containers and empty container slots in different types of sample container carriers. The model training data is based on images of different sample container carrier types, each having at least two sample containers and at least one empty container slot. The images are overlaid with an estimated grid of slots based on identified locations of the at least two sample containers in the image and at least one pre-determined grid parameter. Image patches are extracted from the images based on the estimated grid. Each image patch includes a sample container or an empty container slot; from such patches, locations of sample containers and empty container slots can be identified in sample container carriers received in a diagnostic laboratory system. Systems and methods of training a model and operating a diagnostic laboratory system are disclosed.
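The grid estimation step can be sketched with simple geometry: given the pixel positions of two detected containers, their known (row, column) slot indices, and the carrier's row/column counts as the pre-determined grid parameter, infer the pitch and compute every slot center. This assumes an axis-aligned, uniformly spaced grid; rotated carriers would need additional handling.

```python
def estimate_grid(pos_a, pos_b, slot_a, slot_b, n_rows, n_cols):
    (xa, ya), (xb, yb) = pos_a, pos_b
    (ra, ca), (rb, cb) = slot_a, slot_b
    pitch_x = (xb - xa) / (cb - ca) if cb != ca else 0.0   # pixels per column step
    pitch_y = (yb - ya) / (rb - ra) if rb != ra else 0.0   # pixels per row step
    return {(r, c): (xa + (c - ca) * pitch_x, ya + (r - ra) * pitch_y)
            for r in range(n_rows) for c in range(n_cols)}

# Containers detected at pixel (100, 40) in slot (0, 0) and (260, 200) in slot (4, 4) of a 5x5 carrier.
grid = estimate_grid((100, 40), (260, 200), (0, 0), (4, 4), n_rows=5, n_cols=5)
print(grid[(2, 3)])   # estimated center of an unoccupied slot, used to crop an image patch
```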
Abstract of: KR20250155192A
An embodiment of the present disclosure provides a method of coupling an optical access network with a machine learning model, the method comprising: selecting a machine learning (ML) model based on user requirements; comparing an evaluation metric value of the selected ML model with a preset reference value; and, based on whether the evaluation metric value exceeds the reference value, registering at least one of the selected ML model and an ML model fine-tuned from the selected ML model in an artifact store, wherein the ML model registered in the artifact store is coupled to a virtual passive optical network (PON) and a physical PON based on the user requirements.
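A minimal sketch of the registration flow just described: evaluate the selected ML model against a preset reference value, fall back to a fine-tuned variant if it falls short, and register whichever variant clears the threshold in the artifact store (from which it would later be coupled to virtual and physical PONs). The callables and threshold are stand-ins.

```python
def register_if_good(model, evaluate, fine_tune, artifact_store, threshold=0.9):
    score = evaluate(model)
    if score <= threshold:                 # metric does not exceed the reference value
        model = fine_tune(model)
        score = evaluate(model)
    if score > threshold:
        artifact_store.append(model)       # registered model is later coupled to virtual/physical PONs
        return model
    return None

store = []
registered = register_if_good(
    {"name": "pon-traffic-forecaster"},
    evaluate=lambda m: 0.93,
    fine_tune=lambda m: {**m, "fine_tuned": True},
    artifact_store=store,
)
print(registered, len(store))
```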
Publication No.: WO2025227051A1 30/10/2025
Applicant:
GOOGLE LLC [US]
Abstract of: WO2025227051A1
Provided are a number of different systems and methods that enhance the performance and reliability of sequence processing models, particularly when applied to the processing of medical data and queries. The proposed techniques collectively address the challenges of integrating vast, evolving external knowledge sources, refining responses in the face of uncertainty, and efficiently managing complex, multi-modal datasets such as are commonly found in the medical field. Specifically, by leveraging iterative self-training, uncertainty-guided retrieval, multi-stage prompting, and/or advanced model architectures, the disclosed technology improves how machine learning models can process, analyze, and/or utilize complex information, resulting in improved computer systems applicable to a number of different applications or use cases, including medical diagnostics and/or other medical applications.