Abstract of: US20260044798A1
A case assistant for client support professionals uses robotic process automation (RPA) technologies to analyze large amounts of data related to historical client cases similar to current open cases, data related to skilled experts associated with similar client cases, and data related to business exceptions. Several processes provide this data to client support professionals, including a document similarity finder built from a vector data collector, a tokenizer, a stop word remover, a relevance finder, and a similarity finder, several of which employ a variety of machine learning technologies. Additional processes include a skilled experts finder and a business exceptions finder.
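The document-similarity pipeline named in the abstract (tokenizer, stop word remover, vector data collector, similarity finder) can be sketched in plain Python. The stop word list, bag-of-words vectors, and cosine scoring below are illustrative assumptions, not the patent's actual implementation:

```python
import math
import re
from collections import Counter

# Illustrative stop word list; a real system would use a fuller one.
STOP_WORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "on", "for", "at", "after"}

def tokenize(text):
    """Tokenizer: lowercase and split on non-alphanumeric characters."""
    return [t for t in re.split(r"\W+", text.lower()) if t]

def remove_stop_words(tokens):
    """Stop word remover: drop common words carrying little signal."""
    return [t for t in tokens if t not in STOP_WORDS]

def to_vector(tokens):
    """Vector data collector: sparse bag-of-words term-frequency vector."""
    return Counter(tokens)

def cosine_similarity(v1, v2):
    """Similarity finder: cosine similarity between two sparse vectors."""
    dot = sum(v1[t] * v2[t] for t in v1 if t in v2)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def find_similar_cases(open_case, historical_cases, top_k=3):
    """Rank historical cases by similarity to the current open case."""
    query = to_vector(remove_stop_words(tokenize(open_case)))
    scored = [
        (cosine_similarity(query, to_vector(remove_stop_words(tokenize(c)))), c)
        for c in historical_cases
    ]
    return [c for score, c in sorted(scored, reverse=True)[:top_k] if score > 0]
```

For example, an open case about a "crash at login after printer driver install" would rank a historical "printer driver crash on login" case above an unrelated VPN case.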
Abstract of: US20260044745A1
Certain aspects of the present disclosure provide techniques and apparatus for machine learning. In an example method, a machine learning model comprising a plurality of layers, and a set of input data for the machine learning model, are accessed. A combination of hyperparameters for the machine learning model is selected based on the set of input data, comprising selecting, for each respective layer of the plurality of layers, a respective cache size based on the input data. The machine learning model is deployed according to the combination of hyperparameters.
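The per-layer cache-size selection described above can be sketched as follows. The proportional-to-width heuristic capped by input sequence length is an assumption for illustration; the abstract does not specify how the cache size is derived from the input data:

```python
def select_hyperparameters(layer_widths, input_batch, cache_budget=4096):
    """Select a cache size for each layer based on the input data: each
    layer gets a share of the budget proportional to its width, capped at
    the longest input sequence length (an illustrative heuristic)."""
    seq_len = max(len(sample) for sample in input_batch)
    total_width = sum(layer_widths)
    return {
        f"layer_{i}_cache": min(seq_len, max(1, cache_budget * w // total_width))
        for i, w in enumerate(layer_widths)
    }

def deploy(model_layers, hyperparameters):
    """Deployment stub: attach the selected cache size to each layer's config."""
    return [dict(layer, cache=hyperparameters[f"layer_{i}_cache"])
            for i, layer in enumerate(model_layers)]
```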
Abstract of: US20260044758A1
A system for generating and deploying savant language models that operate in conjunction with a directed acyclic graph. In some cases, a first-stage cloud-based system may utilize large language models and domain-specific directed acyclic graphs to generate deployable models. The deployable models may include the savant language models and sub-domain directed acyclic graphs that may operate in environments with restricted computational resources.
Abstract of: US20260044690A1
Disclosed are various embodiments for automated translations for autonomous chat agents. A build service can send a translation request to a machine translation service, the translation request comprising training data in a first language and specifying a second language. The build service can then receive translated training data from the machine translation service, the translated training data having been translated from the training data into the second language. Next, the build service can create a translated workflow that comprises a translated machine learning model and a translated intent. Subsequently, the build service can add the translated training data to the translated workflow and train the translated machine learning model using the translated training data.
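The build-service flow above (request translation, create a translated workflow, add the translated data, train) can be sketched with a mock translation service. The `Workflow` shape and the bracket-tagging translator are assumptions for illustration, not the disclosed system's API:

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """Minimal stand-in for a translated workflow with a model and intent."""
    language: str
    training_data: list = field(default_factory=list)
    trained: bool = False

def mock_translate(training_data, target_language):
    """Stand-in for the machine translation service (illustrative only)."""
    return [f"[{target_language}] {utterance}" for utterance in training_data]

def build_translated_workflow(source_data, source_lang, target_lang,
                              translate=mock_translate):
    # 1. Translation request: training data in the first language plus the
    #    requested second language.
    request = {"training_data": source_data, "source": source_lang,
               "target": target_lang}
    translated_data = translate(request["training_data"], request["target"])
    # 2. Create a translated workflow holding a (stub) model and intent.
    workflow = Workflow(language=target_lang)
    # 3. Add the translated training data and mark the model as trained.
    workflow.training_data.extend(translated_data)
    workflow.trained = True
    return workflow
```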
Abstract of: US20260044803A1
A method can include receiving input data comprising a plurality of features for a plurality of users. A method can include providing the input data to a risk prediction model configured to predict a termination likelihood for each user. In some implementations, the risk prediction model can be a random forest model. A method can include identifying, based on the predicted termination likelihood for each user, an at-risk population including users with a termination risk above a threshold amount. A method can include determining, for each user of the at-risk population, a profile type of a plurality of profile types. The profile type can describe certain attributes of the user. In some implementations, an end user can select a profile type. A method can include outputting members of the at-risk population having the selected profile type.
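The threshold-and-profile filtering described above can be sketched as follows. The toy risk model and the tenure-based profiling rule are illustrative assumptions standing in for the random forest and the disclosed profile types:

```python
def members_at_risk(users, predict_risk, threshold, selected_profile):
    """Keep users whose predicted termination likelihood exceeds the
    threshold, then output those matching the end-user-selected profile type."""
    at_risk = [u for u in users if predict_risk(u) > threshold]

    def profile_type(user):
        # Hypothetical profiling rule: bucket the at-risk user by tenure.
        return "new_hire" if user["tenure_years"] < 2 else "veteran"

    return [u["name"] for u in at_risk if profile_type(u) == selected_profile]

def toy_risk_model(user):
    """Stand-in for the random forest: risk grows with recorded absences."""
    return min(1.0, user["absences"] / 10)
```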
Abstract of: WO2026035512A1
A network device (PRU, WTRU) may receive a request to collect data for artificial intelligence or machine learning (AI/ML) positioning model training, for example from a network data analytics function (NWDAF) and/or from a model training logical function (MTLF) (450b). The request may include an indication of an area of interest, a time window associated with the data for AI/ML positioning model training, a requested number of data samples of the data for AI/ML positioning model training, and/or a data source type of the data for AI/ML positioning model training. The network device may receive the data for AI/ML positioning model training and/or receive location data associated with one or more WTRUs. The network device may send the location data and the data for AI/ML positioning model training to the NWDAF or the MTLF (485, 495).
Abstract of: WO2026035375A1
Aspects of the disclosure are directed to a (e.g., capability-based window) configuration for a reference signal receive (RS-Rx) resource-based processing task associated with an artificial intelligence machine learning (AIML) model. In an aspect, the RS-Rx resource-based processing task may be related to sensing or positioning or another task type (e.g., beam management, channel state information (CSI) operations, etc.). In an aspect, the RS-Rx task may be associated with any type of RS-Rx resource relative to the UE (e.g., downlink reference signals, sidelink reference signals, etc.). Such aspects may provide various technical advantages, such as AIML processing window configurations that are configured based on AIML model-specific capabilit(ies) of the UE, which may improve functionalities associated with the AIML model (e.g., improved sensing or positioning or beam management, etc.) and/or improve AIML model monitoring.
Abstract of: WO2026032684A1
Disclosed are devices, methods, apparatuses, and computer-readable media for fallback of machine learning functionality. An example apparatus for a terminal device may include at least one processor and at least one memory. The at least one memory may store instructions that, when executed by the at least one processor, may cause the apparatus at least to: receive, from a network, at least one first configuration for a machine learning functionality of a determined network function, and a second configuration for a non-machine-learning functionality of the determined network function, wherein the second configuration is a fallback configuration from the first configuration; receive, from the network, a first indication indicating that the terminal device is to activate fallback from the machine learning functionality; and, in response to the first indication, apply modifications to the first configuration for use during fallback and enable the second configuration in the network function.
Abstract of: WO2026033326A1
An apparatus including at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: transmit, to a network entity, a configuration used when at least one model is trained; wherein the at least one model is an artificial intelligence or machine learning model; and receive, from the network entity, information related to a consistency between the configuration used when the at least one model is trained and a configuration used when the at least one model is to be applied during inference.
Abstract of: WO2026035335A1
Certain aspects of the present disclosure provide techniques and apparatus for machine learning. In an example method, a machine learning model comprising a plurality of layers, and a set of input data for the machine learning model, are accessed. A combination of hyperparameters for the machine learning model is selected based on the set of input data, comprising selecting, for each respective layer of the plurality of layers, a respective cache size based on the input data. The machine learning model is deployed according to the combination of hyperparameters.
Abstract of: WO2026035326A1
The disclosed concepts relate to training a machine learning model to provide help sessions during a video game. For instance, prior video game data from help sessions provided by human users can be filtered to obtain training data. Then, a machine learning model can be trained using approaches such as imitation learning, reinforcement learning, and/or tuning of a generative model to perform help sessions. Then, the trained machine learning model can be employed at inference time to provide help sessions to video game players.
Abstract of: EP4694428A1
A method performed by a first device in a wireless communication system, according to at least one embodiment disclosed in the present specification, comprises: receiving, from a second device, one or more data sets related to positioning; training an artificial intelligence/machine learning (AI/ML) model on the basis of at least a portion of the one or more data sets; and acquiring positioning information output from the trained AI/ML model, wherein data-label-related information is attached to each of the received data sets, and the data-label-related information may include positioning-related actual measurement information and information related to the quality of the actual measurement information.
Abstract of: CN120898407A
Embodiments of the present disclosure provide machine learning model feature selection in a communication network. The method includes, in response to a feature selection trigger of a first machine learning model, determining a target input feature set for an analysis task based on contextual information related to the analysis task, where the first machine learning model is currently provisioned to perform the analysis task based on a current input feature set that differs from the target input feature set; and causing a second machine learning model to be provisioned to perform the analysis task based on the determined target input feature set. In this manner, the machine learning model may be supplied with an optimized set of input features that is applicable to the current network context and provides an acceptable level of model performance.
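The trigger-then-reprovision flow above can be sketched as follows. The quality/relevance selector is a hypothetical stand-in; the abstract does not specify how the target feature set is derived from the contextual information:

```python
def select_target_features(candidates, context):
    """Hypothetical selector: keep features whose context-reported data
    quality clears a threshold, ranked by reported relevance."""
    usable = [f for f in candidates if context["quality"].get(f, 0.0) >= 0.8]
    return sorted(usable, key=lambda f: context["relevance"].get(f, 0.0),
                  reverse=True)

def on_feature_selection_trigger(current_features, candidates, context, provision):
    """If the target feature set differs from the current one, provision a
    second model on the target set; otherwise keep the current model."""
    target = select_target_features(candidates, context)
    if set(target) != set(current_features):
        return provision(target)
    return None
```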
Abstract of: EP4693046A1
Systems, computer program products, and methods are described for resource allocation in a hybrid distributed computational environment. An example system segments a received task into multiple sub-tasks. Upon partitioning the task, each sub-task is assigned to an appropriate computational resource (e.g., CPU, GPU, or QPU), enabling parallel execution of multiple sub-tasks. Both task partitioning and computational resource assignment are performed using a machine learning model. Additionally, the machine learning model may continuously monitor the execution of each sub-task by receiving resource utilization information and performance metrics associated with the execution of each sub-task. The resource utilization information and performance metrics may then be used to update the machine learning model.
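The partition-and-assign step can be sketched as follows. A simple rule stands in for the learned router, and a pre-split task stands in for the partitioning model; both are illustrative assumptions:

```python
def classify_sub_task(sub_task):
    """Illustrative rule standing in for the learned resource router."""
    if sub_task["quantum"]:
        return "QPU"
    return "GPU" if sub_task["parallel"] else "CPU"

def partition_and_assign(task, split=lambda t: t, classify=classify_sub_task):
    """split: task -> sub-tasks; classify: sub-task -> resource label.
    Returns sub-task names grouped by assigned resource, ready for
    parallel execution."""
    assignments = {"CPU": [], "GPU": [], "QPU": []}
    for sub in split(task):
        assignments[classify(sub)].append(sub["name"])
    return assignments
```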
Abstract of: EP4693123A1
A biomass utilization support device: acquires biomass information relating to a biobased material, and product information for each of a plurality of products, including information about the materials that make up each product; uses a machine learning model, trained to estimate appropriate replacement amounts when a portion of a product's materials is replaced with the biobased material, together with the acquired biomass information and product information, to estimate the appropriate values for each of the plurality of products; calculates, for each of the plurality of products, environmental impact indicators for the case in which a portion of its materials has been replaced with the biobased material at the estimated replacement amounts; and outputs support information listing the estimated appropriate values and the calculated environmental impact indicators.
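The estimate-then-score loop above can be sketched as follows. The capped-ratio replacement rule and the single impact factor are toy stand-ins for the trained model and the environmental impact calculation:

```python
def support_report(products, biomass, impact_factor=2.5):
    """For each product, estimate a replacement amount (simple stand-in for
    the trained model: replaceable ratio capped by available stock) and a
    toy environmental impact indicator (replaced mass * impact_factor)."""
    rows = []
    for p in products:
        amount = min(p["material_kg"] * biomass["max_ratio"], biomass["stock_kg"])
        rows.append({
            "product": p["name"],
            "replacement_kg": round(amount, 2),
            "impact_indicator": round(amount * impact_factor, 2),
        })
    return rows
```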
Abstract of: WO2024211680A1
A device, a method, a system, and one or more computer-readable media. A first example device is to host a management service (MnS) producer for a wireless cellular network. One or more processors of the first device are to receive, from an MnS consumer, a request to perform AI/ML emulation in one or more available machine learning (ML) emulation environments; and send to the MnS consumer one or more instances of an information object class (IOC) associated with the process of the AI/ML emulation. A second example device is to host an MnS consumer. One or more processors of the second device are to send, to an MnS producer, a request to perform AI/ML emulation in one or more available machine learning (ML) emulation environments; and receive, from the MnS producer, one or more instances of an information object class (IOC) associated with the process of the AI/ML emulation.
Publication No.: EP4693331A1 11/02/2026
Applicant:
NEC SOLUTION INNOVATORS LTD [JP]
NEC Solution Innovators, Ltd
Abstract of: EP4693331A1
This learning model generation device 10 is equipped with a learning model generation unit 11 that, when a function expressing a change in an inspection value obtained by inspecting a person is set, generates a learning model in which the inspection value is the explanatory variable and a parameter of the function is the objective variable, by performing machine learning using, as training data, inspection values of sample people and the parameters of the function for those sample people.
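The explanatory-variable/objective-variable setup above can be sketched with ordinary least squares on the sample people's training pairs. The linear model form is an assumption for illustration; the abstract does not name the learning method:

```python
def fit_parameter_model(inspection_values, parameters):
    """Ordinary least squares with the inspection value as the explanatory
    variable and the function parameter as the objective variable.
    Returns a callable predicting the parameter for a new inspection value."""
    n = len(inspection_values)
    mx = sum(inspection_values) / n
    my = sum(parameters) / n
    sxx = sum((x - mx) ** 2 for x in inspection_values)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(inspection_values, parameters))
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda x: slope * x + intercept
```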