Abstract of: WO2026081162A1
An MPPT photovoltaic power optimization method and system based on photovoltaic modules, relating to the technical field of photovoltaic power optimization. The method comprises: collecting photovoltaic signals and analyzing the collected signals; on the basis of a deep learning algorithm, learning from historical data and real-time environmental data to predict future illumination and temperature change trends, and generating a corresponding power point prediction model; using an edge computing device to process the data locally and implement control, adjusting the output power of a photovoltaic module in real time on the basis of a generated MPPT control strategy; and, by means of a reinforcement learning algorithm, optimizing the installation layout of the photovoltaic modules. By introducing a deep learning model combined with dynamic step size adjustment and perturb-and-observe strategies, accurate and efficient tracking of the maximum power point of a photovoltaic system in a complex environment is realized, effectively improving the stability and response speed of power output. Power fluctuations and losses are reduced, thereby significantly improving the overall energy efficiency of the photovoltaic power generation system.
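The dynamic-step perturb-and-observe tracking named above is concrete enough to sketch. Below is a minimal Python illustration, assuming a measurable power-voltage curve; measure_pv, k_gain, and the toy curve are placeholders, not the patent's implementation.

```python
# Minimal sketch of perturb-and-observe MPPT with a dynamic step size.
# measure_pv, k_gain, and the toy curve are illustrative assumptions.

def perturb_and_observe(measure_pv, v_init=30.0, step_init=0.5,
                        k_gain=0.5, step_floor=0.01, n_iters=100):
    """Perturb the operating voltage, observe the power change, and
    scale the next step by |dP/dV| so tracking is fast far from the
    maximum power point and stable near it."""
    v, step = v_init, step_init
    p_prev = measure_pv(v)
    for _ in range(n_iters):
        v_new = v + step
        p_new = measure_pv(v_new)
        dp, dv = p_new - p_prev, v_new - v
        if dv != 0:
            step_mag = min(step_init, max(step_floor, k_gain * abs(dp / dv)))
        else:
            step_mag = step_init
        # Keep moving in the direction that raised power; reverse otherwise.
        step = step_mag if dp * dv > 0 else -step_mag
        v, p_prev = v_new, p_new
    return v

# Toy concave power curve with its maximum power point at 34 V.
power = lambda v: max(0.0, 120.0 - 0.1 * (v - 34.0) ** 2)
print(round(perturb_and_observe(power), 1))  # ~34.0
```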
Abstract of: US20260111787A1
Machine learning models trained using personal data are automatically retrained upon deletion of the personal data. A system identifies a first data set including personal data and used to train a machine learning model. The system deletes the personal data from a data store associated with the machine learning model. The system also automatically retrains, based on deleting the personal data, the machine learning model using a second data set that excludes the personal data.
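A minimal sketch of the delete-then-retrain flow, assuming a pandas DataFrame as the data store and a scikit-learn estimator; the column names and model choice are assumptions, not the patent's.

```python
# Drop one subject's personal data, then retrain on the remaining set.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def delete_and_retrain(store: pd.DataFrame, subject_id: str,
                       feature_cols: list, label_col: str):
    """Delete the personal data (first data set), then automatically
    retrain on a second data set that excludes it."""
    remaining = store[store["subject_id"] != subject_id].reset_index(drop=True)
    model = LogisticRegression().fit(remaining[feature_cols], remaining[label_col])
    return remaining, model
```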
Abstract of: US20260111727A1
Data is the “fuel” that powers the machine learning “engine” for Artificial Intelligence. However, identifying high quality data that can catalyze smarter AI, AGI, and SuperIntelligent systems is becoming an increasingly challenging bottleneck for machine learning. This invention not only describes novel methods for identifying the most valuable data, but it also presents an entirely new framework for understanding the information content of AI-relevant datasets. The methods can be used by intelligent systems autonomously or in collaboration with humans. Novel methods for accelerating AI learning, and for updating the knowledge of AI systems in real-time, are also disclosed. Consistent with the view that human survival may depend on the fastest path to AGI also being the safest path, the invention describes catalysts which help maximize alignment between the values of AGI and humans. These innovative catalysts increase not only the intelligence, but also the safety, of AI systems.
Abstract of: WO2026085086A1
Provided are methods for an explainable deep learning system. Some methods include encoding a state and a trajectory of an autonomous vehicle into features and generating concept predictions based on the features. An explanation is generated based on the concept predictions. Systems and computer program products are also provided.
Abstract of: US20260110609A1
The present disclosure relates to a stress-strain prediction method for gap-graded soils based on coupling of the discrete element method and machine learning. The method includes the following steps: S1. obtaining baseline stress-strain data; S2. establishing and verifying a discrete element model; S3. establishing discrete element specimens of gap-graded soils with different gradations; S4. obtaining stress-strain data of the different gap-graded soils; S5. establishing a raw database; S6. partitioning the raw database; S7. establishing a random forest model; S8. training the random forest model to obtain a stress prediction model; S9. evaluating the trained random forest model; S10. predicting stress data of the gap-graded soils. This method can quickly predict the stress-strain curve of a gap-graded soil directly from its particle size ratio and fines content, improving efficiency.
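Steps S6-S10 map directly onto a standard random forest regression workflow. The sketch below uses synthetic data in place of the DEM database; the column meanings (size ratio, fines content, strain) and target function are illustrative assumptions.

```python
# Random forest predicting stress from gradation descriptors and strain.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the DEM database: (size ratio, fines content, strain).
X = rng.uniform([2.0, 0.05, 0.0], [10.0, 0.35, 0.15], size=(500, 3))
y = 50 * X[:, 2] / (0.01 + X[:, 2]) * (1 + 0.1 * X[:, 0]) * (1 - X[:, 1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)  # step S6
model = RandomForestRegressor(n_estimators=200, random_state=0)  # step S7
model.fit(X_tr, y_tr)                                            # step S8
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))  # step S9
# Step S10: predict a stress-strain curve for a new gradation.
strains = np.linspace(0.0, 0.15, 20)
curve = model.predict(np.column_stack([np.full(20, 5.0), np.full(20, 0.2), strains]))
print(curve[:3].round(1))
```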
Abstract of: EP4730710A1
Systems and methods are disclosed for responding to a data incident. One or more processors receive one or more indicators of a data leak occurring at one or more nodes of a network. One or more processors cause an identification, by a machine-learning model, of one or more compromised nodes within the network based on the one or more indicators of a data leak. One or more processors may receive from the machine-learning model the identification of the one or more compromised nodes. One or more processors modify access permissions at one or more identified compromised nodes based on a user permission schema or pre-determined access rules, in response to the data leak. One or more processors cause a generation of a notification regarding the data leak and the modifications of access permissions to one or more users associated with the network.
Abstract of: WO2024255997A1
A data processing apparatus (10) for enhancing, in particular optimizing, a machine learning, ML, model for parallelized operation on a plurality of processing devices (20) is disclosed, wherein each processing device (20) comprises a plurality of processing elements, PEs, (21a-d) configured to perform one or more of a plurality of ML model tasks. The data processing apparatus (10) is configured to generate, based on the ML model, a computational graph, CG, representation (30) of the ML model, wherein the CG representation (30) comprises a plurality of nodes (31) and a plurality of edges (32), wherein each of the plurality of nodes (31) is associated with one or more of the plurality of ML model tasks and wherein the plurality of edges (32) define a plurality of dependencies of the plurality of nodes (31) of the CG representation (30). Furthermore, the data processing apparatus (10) is configured to generate an enhanced CG representation of the ML model by adding to the CG representation (30) of the ML model one or more further dependencies of the plurality of nodes (31) of the CG representation (30) of the ML model. The data processing apparatus (10) is further configured to compile the enhanced CG representation of the ML model to generate code for each of the plurality of processing devices (20).
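The core idea, adding dependency edges to a computational graph before compilation, can be sketched with Python's standard-library topological sorter; the graph structure and node names are illustrative assumptions.

```python
# Enhance a computational graph with extra dependency edges, then derive
# a valid execution order. Nodes and edges here are illustrative.
from graphlib import TopologicalSorter

def enhance_and_order(edges, extra_deps):
    """edges / extra_deps map a node to the set of nodes it depends on.
    Added dependencies constrain the parallel schedule without changing
    what each node computes."""
    enhanced = {n: set(d) for n, d in edges.items()}
    for node, deps in extra_deps.items():
        enhanced.setdefault(node, set()).update(deps)
    return list(TopologicalSorter(enhanced).static_order())

# Two matmul branches; the added edge serializes them on one device.
cg = {"matmul_a": set(), "matmul_b": set(), "add": {"matmul_a", "matmul_b"}}
print(enhance_and_order(cg, {"matmul_b": {"matmul_a"}}))
# -> ['matmul_a', 'matmul_b', 'add']
```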
Abstract of: WO2024258464A1
Techniques for generating synthetic data for machine learning (ML) models are described. A system includes a language model that processes a task and a corresponding set of example inputs to generate another input, referred to herein as machine-generated data. The machine-generated data is processed using the ML model that data is being generated for to determine a model output, and the model output is analyzed to determine whether it corresponds to a target output. If the model output corresponds to the target output, then the machine-generated data is added to the set of example inputs and one of the original example inputs is removed to generate an updated set of example inputs. The updated set can be used for various training techniques.
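The generate-check-swap loop reads as follows in a minimal sketch; generate_candidate stands in for the language model and is an assumption, not the patent's API.

```python
# Grow a set of example inputs with machine-generated data that the
# target ML model maps to the desired output.
import random

def grow_examples(examples, target_model, target_output,
                  generate_candidate, rounds=10):
    """Ask the generator for a new input; keep it only if the ML model
    produces the target output for it, then swap out one original
    example so the set size stays constant."""
    examples = list(examples)
    for _ in range(rounds):
        candidate = generate_candidate(examples)          # machine-generated data
        if target_model(candidate) == target_output:      # model output check
            examples.pop(random.randrange(len(examples))) # remove one original
            examples.append(candidate)                    # updated example set
    return examples
```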
Abstract of: EP4730132A1
A received query is parsed by a first machine learning (ML) model to retrieve one or more semantic entities. Next, a second ML model is trained with a list of application programming interfaces (APIs) and corresponding first data to generate a second trained ML model, which receives the one or more semantic entities as inputs. Based on these inputs, the second trained ML model selects a given API from the list of APIs. Then, a third ML model is trained with the given API and corresponding second metadata to generate a third trained ML model. Next, the third trained ML model receives the one or more semantic entities as inputs and generates a given API call for invoking the given API. Then, the given API call is executed and the query is completed for a first computing system based on invoking the given API.
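The three-stage flow (extract entities, select an API, build the call) can be sketched as a pipeline of callables; every name below is an illustrative stand-in for a trained model, not the patent's components.

```python
# Skeleton of the query-to-API-call pipeline with stand-in models.
def complete_query(query, extract_entities, select_api, build_call, api_registry):
    entities = extract_entities(query)          # first model: semantic entities
    api_name = select_api(entities)             # second model: pick an API
    call_args = build_call(api_name, entities)  # third model: form the call
    return api_registry[api_name](**call_args)  # execute and complete the query

# Toy usage with rule-based stand-ins for the three models.
registry = {"weather": lambda city: f"Forecast for {city}"}
out = complete_query(
    "weather in Oslo",
    extract_entities=lambda q: {"city": q.split()[-1]},
    select_api=lambda e: "weather",
    build_call=lambda name, e: {"city": e["city"]},
    api_registry=registry,
)
print(out)  # Forecast for Oslo
```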
Abstract of: US12592860B2
Embodiments relate to analyzing network packets in telecommunication networks using machine learning models. The network packets are correlated and then labeled to indicate successes or failures in a subtask of a communication flow. Features are extracted based on the labels and correlated network packets. The extracted features are applied to a machine learning model to predict or infer success or failure of the entire communication flow. The result from the machine learning model may in turn be applied to subsequent machine learning models to predict the root cause of a failure or to predict or infer the type of success. In this way, more accurate diagnoses of network issues in telecommunication networks can be made more expediently.
Abstract of: BE1032955A1
The invention relates to a computer-implemented method for searching for database objects in a database (50), comprising the steps of: receiving (S1), through an input interface (10), object data (DO) of a search object; determining (S2), by a machine learning (ML) encoding module (30), a vectorial object encoding of the search object using the object data (DO), wherein the vectorial encoding comprises at least one feature vector (VO); determining (S3), by a search module (40), a similarity of the at least one feature vector (VO) to feature vectors of the database objects (OD); and determining (S4), by the search module (40), a search result (E) from database objects (OD) depending on the determined similarity.
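Steps S2-S4 amount to encode-then-rank vector search. A minimal numpy sketch follows; the encoder, database arrays, and cosine similarity metric are assumptions for illustration.

```python
# Encode a search object and rank database objects by cosine similarity.
import numpy as np

def search(encode, object_data, db_vectors, db_objects, top_k=3):
    """Encode the search object into a feature vector (step S2), rank
    database objects by similarity (step S3), and return the best
    matches as the search result (step S4)."""
    q = encode(object_data)
    q = q / np.linalg.norm(q)
    db = db_vectors / np.linalg.norm(db_vectors, axis=1, keepdims=True)
    sims = db @ q
    best = np.argsort(sims)[::-1][:top_k]
    return [(db_objects[i], float(sims[i])) for i in best]
```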
Abstract of: WO2026079733A1
A method of a terminal may comprise the steps of: identifying, from inference result reporting for a plurality of time instances, a candidate set related to the number of beams to be included in a reporting unit for differential reporting; receiving, from a base station, information indicating a first number belonging to the candidate set; and transmitting, to the base station, an inference result report including the reporting unit that includes inference results for the number of beams corresponding to the first number.
Abstract of: WO2026079735A1
A method for a user equipment comprises the steps of: receiving, from a base station, configuration information about a number Nt (where Nt is a natural number of 1 or more) of a plurality of time instances; receiving, from the base station, configuration information about a number Nb (where Nb is a natural number of 1 or more) of beams to be reported for each of the plurality of time instances; generating an inference result report comprising inference results for Nt*Nb beams; and transmitting the inference result report to the base station, wherein the inference result report comprises one or more reporting units, each of the one or more reporting units comprises inference results for K (where K is a natural number of 1 or more) or fewer beams, the number of the one or more reporting units is M*Nt, and M may be defined as M = ceiling(Nb/K).
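The sizing rule M = ceiling(Nb/K) can be checked with a short worked example; the values below are illustrative, not taken from the claims.

```python
# Worked check of the report sizing rule from the abstract.
from math import ceil

Nt, Nb, K = 4, 10, 4   # illustrative values: 4 time instances, 10 beams, 4 beams/unit
M = ceil(Nb / K)       # reporting units per time instance -> 3
total_units = M * Nt   # 12 reporting units in the whole report
print(M, total_units)  # each unit carries at most K = 4 beam results
```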
Abstract of: WO2026078301A1
Example embodiments of the present disclosure provide a solution for performance testing triggered by a functionality change. In an example method, a terminal device determines a change in an artificial intelligence/machine learning (AI/ML) model of a functionality of the terminal device that is connected to a radio access network. The terminal device then transmits a functionality applicability report for the model of the functionality, wherein the functionality applicability report includes a reason for the change in the functionality, an applicability indication for the functionality, and a model status of the AI/ML model. Next, the terminal device receives a configuration message indicating at least one test configuration for a performance testing procedure for the AI/ML model of the functionality, wherein the at least one test configuration includes at least one procedure parameter for the performance testing procedure determined based on the reason, the applicability indication, and the model status.
Abstract of: WO2026080665A1
Some non-limiting aspects of the present disclosure describe generating a dataset for implementing a rules-driven query system on a machine learning model. With this method, the system can interchange data easily, since few modifications would be needed to shift this approach to using any other information. In this way, the present disclosure describes reducing bottlenecks for sales personnel and technicians who access parts data to complete quotations and service.
Abstract of: US20260106884A1
First event data indicative of a first activity on a computer network, and second event data indicative of a second activity on the computer network, are received. A first machine learning anomaly detection model is applied to the first event data, by a real-time analysis engine operated by the threat indicator detection system in real time, to detect first anomaly data. A second machine learning anomaly detection model is applied to the first anomaly data and the second event data, by a batch analysis engine operated by the threat indicator detection system in a batch mode, to detect second anomaly data. Third anomaly data is detected using an anomaly detection rule. The threat indicator detection system processes the first anomaly data, the second anomaly data, and the third anomaly data using a threat indicator model to identify a threat indicator associated with a potential security threat to the computer network.
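The layering (real-time model, batch model over its output, a rule, then a combining model) can be sketched with stand-in callables; the function signatures and the choice of input stream for the rule are assumptions.

```python
# Three detection layers feeding a combining threat indicator model.
def detect_threat_indicators(first_events, second_events,
                             rt_model, batch_model, rule, indicator_model):
    first_anomalies = [e for e in first_events if rt_model(e)]    # real-time layer
    second_anomalies = [e for e in first_anomalies + second_events
                        if batch_model(e)]                        # batch layer
    third_anomalies = [e for e in first_events + second_events
                       if rule(e)]                                # rule layer (input is an assumption)
    return indicator_model(first_anomalies, second_anomalies, third_anomalies)
```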
Abstract of: US20260105304A1
In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
Abstract of: US20260105354A1
Embodiments of the present disclosure include techniques for automatically generating machine learning models. In one embodiment, sets of hyperparameters corresponding to machine learning models trained on one training data set are provided as an input. The hyperparameters are iteratively selected using an algorithm, such as a bandit algorithm, and used to train an ML model using another training data set. The performance of the trained ML model is evaluated on each iteration until the ML model performance is above a threshold. The resulting set of hyperparameters may then be used to train a final model. In some embodiments, ML models are combined across iterations to improve performance.
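One concrete bandit instance is epsilon-greedy selection over candidate hyperparameter sets; the sketch below uses that scheme as an assumption, since the abstract names bandit algorithms only as an example, and train_and_score is a placeholder.

```python
# Epsilon-greedy iteration over hyperparameter sets until a score
# threshold is met. All names here are illustrative stand-ins.
import random

def bandit_search(hyperparam_sets, train_and_score, threshold=0.9,
                  epsilon=0.2, max_iters=50):
    """Mostly re-try the best-scoring set so far (exploit), sometimes a
    random one (explore); stop once performance clears the threshold."""
    scores = {i: 0.0 for i in range(len(hyperparam_sets))}
    best = None
    for _ in range(max_iters):
        if best is None or random.random() < epsilon:
            i = random.randrange(len(hyperparam_sets))   # explore
        else:
            i = max(scores, key=scores.get)              # exploit
        score = train_and_score(hyperparam_sets[i])
        scores[i] = max(scores[i], score)
        if best is None or score > best[1]:
            best = (hyperparam_sets[i], score)
        if score >= threshold:
            break
    return best  # (hyperparameter set, score) for the final model
```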
Abstract of: US20260105374A1
The disclosed technology can include a system capable of receiving a corpus of documents including a first subset of documents and a second subset of documents, where the first subset of documents and the second subset of documents are received at different time intervals, generating a credibility score and an impact score for each document of the first subset of documents, selecting a training subset from the first subset of documents based on the credibility score and the impact score, training a machine learning algorithm based on the training subset, generating, using the machine learning algorithm, a plurality of hypotheses, and evaluating the plurality of hypotheses against the second subset of documents.
Abstract of: US20260105373A1
Method comprising determining, in a trusted execution environment, values of hyperparameters of a machine learning model based on private data stored in the trusted execution environment, wherein the hyperparameters include system-specific hyperparameters and model-specific hyperparameters; training, in the trusted execution environment, the machine learning model to which the determined values of the system-specific and model-specific hyperparameters are applied to obtain, after one or more epochs of training, a sufficiently trained machine learning model; outputting the sufficiently trained machine learning model from the trusted execution environment; and, inhibiting output of the determined values of the system-specific hyperparameters from the trusted execution environment, wherein the system-specific hyperparameters are not accessible in the outputted sufficiently trained machine learning model.
Abstract of: US20260105359A1
A trained machine learning model identifies that a real-world apparatus has a failed component. The model has been trained with a training corpus that includes content generated by synthesizing a plurality of synthesized operating examples for a given apparatus, wherein at least some of the plurality of synthesized operating examples are generated via a simulation modeling environment that receives as input characterizing information that corresponds to any of a variety of failure states for a component of the given apparatus.
Abstract of: US20260105383A1
An approach is provided for prediction-guided machine learning model ensembling. Label groupings within an output range of a base machine learning model are determined. Accuracies of the base model across the label groupings are evaluated. One or more of the label groupings are identified whose respective evaluated accuracy does not satisfy end user-defined criteria. Using a reduced training set, a specialized machine learning model for a given label grouping included in the identified one or more label groupings is trained. A majority of samples of the reduced training set are from the given label grouping. During inference, it is determined that an initial prediction by the base model is within an output range specified by the given label grouping. A weighting for an ensembling using the base model and the specialized model is determined. Using the ensembling, the initial prediction is refined to generate a final prediction.
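The inference-time refinement reads as a weighted blend between the base model and a grouping-specific specialist. A minimal sketch follows; the fixed weight and the models are placeholders, not the patent's weighting formula.

```python
# Refine a base prediction with a specialist when it falls inside a
# label grouping that has one. Weight and models are illustrative.
def ensembled_predict(x, base_model, specialists, weight=0.6):
    """If the initial prediction lands in a grouping covered by a
    specialized model, return a weighted blend; otherwise keep it."""
    initial = base_model(x)
    for (low, high), specialist in specialists.items():
        if low <= initial < high:                     # grouping match
            return weight * specialist(x) + (1 - weight) * initial
    return initial                                    # no specialist applies

# Toy usage: a specialist covering the low end of the output range.
base = lambda x: 0.8 * x
specialists = {(0.0, 5.0): lambda x: 0.95 * x}
print(ensembled_predict(3.0, base, specialists))  # blended low-range prediction
```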
Abstract of: US20260105369A1
A method for searching for an output machine learning (ML) algorithm to perform an ML task is described. The method comprises: receiving data specifying an input ML algorithm; receiving data specifying a search algorithm that searches for candidate ML algorithms and an evaluation function that evaluates the performance of candidate ML algorithms; generating data representing a symbolic tree from the input ML algorithm; generating data representing a hyper symbolic tree from the symbolic tree; searching an algorithm search space that defines a set of possible concrete symbolic trees from the hyper symbolic tree for candidate ML algorithms, and training the candidate ML algorithms to determine a respective performance metric for each candidate ML algorithm; and selecting one or more trained candidate ML algorithms from among the trained candidate ML algorithms based on the determined performance metrics.
Abstract of: US20260105329A1
Systems and methods for event outcome validation are provided. The system receives a user input indicative of an event and at least one anticipated outcome of the event to be wagered on by the user. The system receives confirmation data associated with an outcome of the event from at least one confirmation data source confirming the outcome of the event and classifies the confirmation data utilizing at least one machine learning algorithm. The system determines a threshold of confirmation data sources to validate the outcome of the event and utilizes the at least one machine learning algorithm to determine a reduced threshold of confirmation data sources to validate the outcome of the event based on at least one of the classified confirmation data and a confirmation rating of the at least one confirmation data source. The system validates the outcome of the event based on the reduced threshold.
Publication No.: US20260105259A1 16/04/2026
Applicant:
CHARLEE AI INC [US]
Abstract of: US20260105259A1
A computerized method for extracting domain-specific insights from a corpus of files containing large documents, comprising: breaking down large chunks of text into smaller sentences/short paragraphs in a domain-specific way; identifying and removing domain noise; identifying the sentence intents of the non-noise sentences; tagging the sentences with other domain-specific attributes; defining a semantic ontology using a graph database based on the sentence intents, a multitude of mini-dictionaries, and domain attributes; applying a pre-defined ontology to tag documents with domain-specific hashtags; and combining the hashtags using machine learning techniques into insights.
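The front end of that pipeline (split, de-noise, tag with intent-derived hashtags) can be sketched in a few lines; the noise patterns, intent keywords, and hashtag names are invented stand-ins for the patent's domain-specific models and dictionaries.

```python
# Split a document into sentences, drop domain noise, map intents to hashtags.
import re

NOISE = re.compile(r"^(page \d+\.?|confidential\.?|\s*)$", re.I)
INTENT_RULES = {"damage": "#damage_reported", "injury": "#injury_reported"}

def extract_hashtags(document: str) -> set:
    sentences = re.split(r"(?<=[.!?])\s+", document)        # break into sentences
    useful = [s for s in sentences if not NOISE.match(s)]   # remove domain noise
    tags = set()
    for s in useful:                                        # intent -> hashtag
        for keyword, tag in INTENT_RULES.items():
            if keyword in s.lower():
                tags.add(tag)
    return tags  # later combined into insights via the semantic ontology

print(extract_hashtags("Page 3. The claimant reported damage to the vehicle."))
# -> {'#damage_reported'}
```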