NEURAL NETWORKS

Results: 235 | Last updated: 17/06/2019 [19:55:00]

Applications published in the last 60 days

Page 1 of 10


AUTOMATIC EXTRACTION OF ATTRIBUTES OF AN OBJECT WITHIN A SET OF DIGITAL IMAGES

Publication No.: WO2019110914A1 13/06/2019

Applicant: BULL SAS [FR]

FR_3074594_A1

Abstract of: WO2019110914A1

The invention concerns a method for recognising objects of a predefined type among a set of types, within a set of digital images, comprising: detecting (11) an object of this predefined type within a digital image (10) from said set, and determining (12) an area (13) of said image encompassing the detected object; generating (14) a signature (15), by a convolutional neural network, from this area, allowing the unambiguous identification of the object; determining (16) a set of attributes (17) from the signature; and storing (18), in a database (19), a record relative to said object associating the signature with the set of attributes; in which the neural network is trained on a learning set consisting of a first set formed from objects associated with a set of attributes and a second set formed from objects not associated with a set of attributes.
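To make the signature-plus-attributes record concrete, here is a minimal sketch (not the patented method) of retrieving a stored attribute set by signature similarity; the `lookup_attributes` helper, the cosine measure, and the 0.9 threshold are all illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two signature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def lookup_attributes(signature, database, threshold=0.9):
    """Return the attribute set of the most similar stored signature,
    or None when no record clears the (assumed) similarity threshold."""
    best_attrs, best_score = None, threshold
    for stored_sig, attrs in database:
        score = cosine(signature, stored_sig)
        if score >= best_score:
            best_attrs, best_score = attrs, score
    return best_attrs

# Toy database of (signature, attribute set) records.
db = [
    (np.array([1.0, 0.0, 0.0]), {"colour": "red", "type": "car"}),
    (np.array([0.0, 1.0, 0.0]), {"colour": "blue", "type": "bag"}),
]
query = np.array([0.95, 0.05, 0.0])
print(lookup_attributes(query, db))  # matches the first record
```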

INFERENCE SERVER AND ENVIRONMENT CONTROLLER FOR INFERRING VIA A NEURAL NETWORK ONE OR MORE COMMANDS FOR CONTROLLING AN APPLIANCE

Publication No.: US2019179268A1 13/06/2019

Applicant: DISTECH CONTROLS INC [CA]

Abstract of: US2019179268A1

Inference server and environment controller for inferring one or more commands for controlling an appliance. The environment controller receives at least one environmental characteristic value (for example, at least one of a current temperature, current humidity level, current carbon dioxide level, and current room occupancy) and at least one set point (for example, at least one of a target temperature, target humidity level, and target carbon dioxide level); and forwards them to the inference server. The inference server executes a neural network inference engine using a predictive model (generated by a neural network training engine) for inferring the one or more commands based on the received at least one environmental characteristic value and the received at least one set point; and transmits the one or more commands to the environment controller. The environment controller forwards the one or more commands to the controlled appliance.

ARTIFICIAL NEURAL NETWORK

Publication No.: US2019180148A1 13/06/2019

Applicant: NOKIA TECHNOLOGIES OY [FI]

CN_109564633_A

Abstract of: US2019180148A1

According to an example aspect of the present invention, there is provided an apparatus comprising memory configured to store data defining, at least partly, an artificial neural network, and at least one processing core configured to train the artificial neural network by applying a test dataset to the artificial neural network with at least one stochastic rectified linear unit, the at least one stochastic rectified linear unit being configured to produce a positive output from a positive input by multiplying the input with a stochastically selected value.
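The stochastic rectified linear unit is easy to sketch. The uniform [0.5, 1.5] range below is an assumption, since the abstract only says the positive input is multiplied by a stochastically selected value.

```python
import numpy as np

def stochastic_relu(x, rng, low=0.5, high=1.5):
    """Stochastic ReLU sketch: non-positive inputs map to zero, and a
    positive input is multiplied by a stochastically selected value
    (here uniform on [low, high] - the range is an assumption)."""
    x = np.asarray(x, dtype=float)
    scale = rng.uniform(low, high, size=x.shape)
    return np.where(x > 0, x * scale, 0.0)

rng = np.random.default_rng(0)
out = stochastic_relu([-2.0, -0.1, 0.0, 1.0, 3.0], rng)
print(out)  # first three entries are 0; last two randomly rescaled
```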

System and method of classifying an action or event

Publication No.: US2019180149A1 13/06/2019

Applicant: CANON KK [JP]

Abstract of: US2019180149A1

A method of classifying an action or event using an artificial neural network. The method comprises obtaining a first and a second plurality of feature responses, corresponding to point data in a first channel and a second channel respectively. Each of the first and second plurality of feature responses have associated temporal and spatial position values, the first and second plurality of feature responses relating to a plurality of objects. The method also comprises generating a third plurality of feature responses based on one of the first plurality of feature responses and one of the second plurality of feature responses, and a weighted combination of associated temporal and spatial position values of the corresponding one of the first and second plurality of feature responses; and classifying an action or event relating to the objects using the artificial neural network based on the third plurality of feature responses.

SYSTEM FOR REAL-TIME OBJECT DETECTION AND RECOGNITION USING BOTH IMAGE AND SIZE FEATURES

Publication No.: US2019180119A1 13/06/2019

Applicant: HRL LAB LLC [US]

Abstract of: US2019180119A1

Described is an object recognition system. Using an integral channel features (ICF) detector, the system extracts a candidate target region (having an associated original confidence score representing a candidate object) from an input image of a scene surrounding a platform. A modified confidence score is generated based on a location and height of detection of the candidate object. The candidate target regions are classified based on the modified confidence score using a trained convolutional neural network (CNN) classifier, resulting in classified objects. The classified objects are tracked using a multi-target tracker for final classification of each classified object as a target or non-target. If the classified object is a target, a device can be controlled based on the target.

THREE-DIMENSIONAL POINT CLOUD TRACKING APPARATUS AND METHOD USING RECURRENT NEURAL NETWORK

Publication No.: US2019179021A1 13/06/2019

Applicant: INST INFORMATION IND [TW]

Abstract of: US2019179021A1

The embodiments of the present invention provide a three-dimensional point cloud tracking apparatus and method using a recurrent neural network. The three-dimensional point cloud tracking apparatus and method can track the three-dimensional point cloud of the entire environment and model the entire environment by using a recurrent neural network model. Therefore, the three-dimensional point cloud tracking apparatus and method can be used to reconstruct the three-dimensional point cloud of the entire environment at the current moment and also can be used to predict the three-dimensional point cloud of the entire environment at a later moment.

INFERENCE SERVER AND ENVIRONMENT CONTROLLER FOR INFERRING ONE OR MORE COMMANDS FOR CONTROLLING AN APPLIANCE TAKING INTO ACCOUNT ROOM CHARACTERISTICS

Publication No.: US2019179270A1 13/06/2019

Applicant: DISTECH CONTROLS INC [CA]

Abstract of: US2019179270A1

Inference server and environment controller for inferring via a neural network one or more commands for controlling an appliance. The environment controller determines at least one room characteristic. The environment controller receives at least one environmental characteristic value and at least one set point. The environment controller transmits the at least one environmental characteristic, set point and room characteristic to the inference server. The inference server executes a neural network inference engine using a predictive model (generated by a neural network training engine) for inferring the one or more commands for controlling the appliance. The inference is based on the received at least one environmental characteristic value, at least one set point and at least one room characteristic. The inference server transmits the one or more commands to the environment controller, which forwards the one or more commands to the controlled appliance.

ENVIRONMENT CONTROLLER AND METHOD FOR INFERRING VIA A NEURAL NETWORK ONE OR MORE COMMANDS FOR CONTROLLING AN APPLIANCE

Publication No.: US2019179269A1 13/06/2019

Applicant: DISTECH CONTROLS INC [CA]

Abstract of: US2019179269A1

Method and environment controller for inferring via a neural network one or more commands for controlling an appliance. A predictive model generated by a neural network training engine is stored by the environment controller. The environment controller receives at least one environmental characteristic value (for example, at least one of a current temperature, current humidity level, current carbon dioxide level, and current room occupancy). The environment controller receives at least one set point (for example, at least one of a target temperature, target humidity level, and target carbon dioxide level). The environment controller executes a neural network inference engine, which uses the predictive model for inferring the one or more commands for controlling the appliance based on the at least one environmental characteristic value and the at least one set point. The environment controller transmits the one or more commands to the controlled appliance.
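The inference step described above can be illustrated with a tiny feedforward pass. Everything below is invented for illustration, not Distech's predictive model: the hand-set weights encode a simple "command proportional to temperature error" rule.

```python
import numpy as np

def infer_commands(env_values, set_points, W1, b1, W2, b2):
    """Minimal feedforward 'inference engine' sketch: concatenate the
    environmental characteristic values and set points, then apply one
    hidden layer to produce command values for the appliance."""
    x = np.concatenate([env_values, set_points])
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer with ReLU
    return W2 @ h + b2                 # command output(s)

# Hypothetical hand-set weights: the two hidden units pick up the
# temperature error in each sign; the output is a heating command.
W1 = np.array([[-1.0, 1.0], [1.0, -1.0]])
b1 = np.zeros(2)
W2 = np.array([[2.0, -2.0]])
b2 = np.zeros(1)

cmd = infer_commands(np.array([18.0]), np.array([21.0]), W1, b1, W2, b2)
print(cmd)  # positive command -> heat the room
```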

TRAINING MULTIPLE NEURAL NETWORKS OF A VEHICLE PERCEPTION COMPONENT BASED ON SENSOR SETTINGS

Publication No.: US2019176841A1 13/06/2019

Applicant: LUMINAR TECH INC [US]

Abstract of: US2019176841A1

A method for controlling a vehicle based on sensor data having variable sensor parameter settings includes receiving sensor data generated by a vehicle sensor while the sensor is configured with a first sensor parameter setting. The method also includes receiving an indicator specifying the first sensor parameter setting, and selecting, based on the received indicator, one of a plurality of neural networks of a perception component, each neural network having been trained using training data corresponding to a different sensor parameter setting. The method also includes generating signals descriptive of a current state of the environment using the selected neural network and based on the received sensor data. The method further includes generating driving decisions based on the signals descriptive of the current state of the environment, and causing one or more operational subsystems of the vehicle to maneuver the vehicle in accordance with the generated driving decisions.
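The selection step reduces to a dispatch table keyed by the received indicator. The setting names and the stand-in "networks" below are hypothetical:

```python
def make_net(name):
    """Stand-in for a neural network trained on one sensor setting."""
    def net(sensor_data):
        return f"{name}:{len(sensor_data)} points"
    return net

# One network per sensor parameter setting, as the abstract describes.
perception_nets = {
    "short_range": make_net("short"),
    "long_range": make_net("long"),
}

def perceive(indicator, sensor_data):
    """Select the network trained for the reported setting, then run
    it on the received sensor data."""
    net = perception_nets[indicator]
    return net(sensor_data)

print(perceive("long_range", [0.1, 0.2, 0.3]))  # runs the long-range net
```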

DOMAIN SEPARATION NEURAL NETWORKS

Publication No.: US2019180136A1 13/06/2019

Applicant: GOOGLE LLC [US]

CN_109643383_A

Abstract of: US2019180136A1

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using an image processing neural network system. One of the systems includes a shared encoder neural network implemented by one or more computers, wherein the shared encoder neural network is configured to: receive an input image from a target domain; and process the input image to generate a shared feature representation of features of the input image that are shared between images from the target domain and images from a source domain different from the target domain; and a classifier neural network implemented by the one or more computers, wherein the classifier neural network is configured to: receive the shared feature representation; and process the shared feature representation to generate a network output for the input image that characterizes the input image.

ON-CHIP COMPUTATIONAL NETWORK

Publication No.: US2019180183A1 13/06/2019

Applicant: AMAZON TECH INC [US]

Abstract of: US2019180183A1

Provided are systems, methods, and integrated circuits for neural network processing. In various implementations, an integrated circuit for neural network processing can include a plurality of memory banks storing weight values for a neural network. The memory banks can be on the same chip as an array of processing engines. Upon receiving input data, the circuit can be configured to use the set of weight values to perform a task defined for the neural network. Performing the task can include reading weight values from the memory banks, inputting the weight values into the array of processing engines, and computing a result using the array of processing engines, where the result corresponds to an outcome of performing the task.
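The read-weights / feed-array / accumulate flow can be sketched in software. Modelling each memory bank as holding the weight rows for a slice of output neurons is an assumption for illustration:

```python
import numpy as np

def banked_matvec(x, weight_banks):
    """Sketch of the described dataflow: weight values are read bank
    by bank and fed into an array of multiply-accumulate units; here
    each 'bank' holds the weight rows for a slice of output neurons."""
    outputs = []
    for bank in weight_banks:      # read weights from one memory bank
        outputs.append(bank @ x)   # multiply-accumulate on the array
    return np.concatenate(outputs)

W = np.arange(12, dtype=float).reshape(4, 3)
banks = [W[:2], W[2:]]             # weights split across two banks
x = np.array([1.0, 0.0, -1.0])
print(banked_matvec(x, banks))     # equals the unbanked product W @ x
```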

APPARATUS AND METHOD FOR EXTRACTING SOUND SOURCE FROM MULTI-CHANNEL AUDIO SIGNAL

Publication No.: US2019180142A1 13/06/2019

Applicant: ELECTRONICS & TELECOMMUNICATIONS RES INST [KR]

Abstract of: US2019180142A1

Disclosed is an apparatus and method for extracting a sound source from a multi-channel audio signal. A sound source extracting method includes transforming a multi-channel audio signal into two-dimensional (2D) data, extracting a plurality of feature maps by inputting the 2D data into a convolutional neural network (CNN) including at least one layer, and extracting a sound source from the multi-channel audio signal using the feature maps.
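The transform-to-2D and feature-map-extraction steps can be sketched as follows. The frame-stacking transform and the hand-set kernels are assumptions; the abstract does not fix either.

```python
import numpy as np

def to_2d(multichannel, frame):
    """Stack fixed-length frames from each channel into 2D data (a
    stand-in transform; the abstract leaves the exact mapping open)."""
    c, n = multichannel.shape
    usable = n - n % frame
    return multichannel[:, :usable].reshape(c * (usable // frame), frame)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution producing one feature map."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

audio = np.random.default_rng(1).standard_normal((2, 32))  # 2 channels
img = to_2d(audio, frame=8)                # (8, 8) 2D representation
kernels = [np.ones((3, 3)) / 9.0, np.eye(3) / 3.0]
feature_maps = [conv2d_valid(img, k) for k in kernels]
print([fm.shape for fm in feature_maps])   # two (6, 6) feature maps
```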

DANGER RANKING USING END TO END DEEP NEURAL NETWORK

Publication No.: US2019180144A1 13/06/2019

Applicant: IMRA EUROPE S A S [FR]

Abstract of: US2019180144A1

A danger ranking training method comprising training a first deep neural network for generic object recognition within generic images, training a second deep neural network for specific object recognition within images of a specific application, training a third deep neural network for specific scene flow prediction within image sequences of the application, training a fourth deep neural network for potential danger areas localization within images or image sequences of the application using at least one human-trained danger tagging method, training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application, and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network as a sequence of transfer learning of the five deep neural networks followed by one or several end-to-end top layers.

Image Recognition Method and Terminal

Publication No.: US2019180101A1 13/06/2019

Applicant: HUAWEI TECH CO LTD [CN]

EP_3486863_A1

Abstract of: US2019180101A1

An image recognition method and a terminal, where the method includes obtaining, by the terminal, an image file including a target object; recognizing, by the terminal, the target object based on an image recognition model in the terminal using a neural network computation apparatus in the terminal to obtain object category information of the target object; and storing, by the terminal, the object category information in the image file as first label information of the target object. Hence, image recognition efficiency of the terminal can be improved, and privacy of a terminal user can be effectively protected.

ARTIFICIAL NEURAL NETWORK FOR LANE FEATURE CLASSIFICATION AND LOCALIZATION

Publication No.: US2019180115A1 13/06/2019

Applicant: GM GLOBAL TECH OPERATIONS LLC [US]

DE_102018131477_A1

Abstract of: US2019180115A1

Systems and method are provided for controlling a vehicle. In one embodiment, a method of controlling a vehicle, includes receiving, via at least one processor, image data from each of plural cameras mounted on the vehicle. The method includes assembling, via the at least one processor, the image data from each of the plural cameras to form assembled image data. The method includes classifying and localizing lane features using an artificial neural network based on the assembled image data, to produce classified and localized lane data. The method includes performing, via the at least one processor, a data fusion process based on the classified and localized lane data, thereby producing fused lane feature data. The method includes controlling, via the at least one processor, the vehicle based, in part, on the fused lane feature data.

PATH PREDICTION FOR A VEHICLE

Publication No.: US2019179328A1 13/06/2019

Applicant: VOLVO CAR CORP [SE]

Abstract of: US2019179328A1

A method and system for predicting a near future path for a vehicle. For predicting the near future path, sensor data and vehicle driving data are collected. Road data indicative of the roadway presently occupied by the vehicle is collected. The sensor data and the vehicle driving data are pre-processed to provide object data comprising a time series of previous positions, headings, and velocities of each of the objects relative to the vehicle. The object data, the vehicle driving data, and the road data are processed in a deep neural network to predict the near future path for the vehicle. The invention also relates to a vehicle comprising the system.

USING AUTOENCODERS FOR TRAINING NATURAL LANGUAGE TEXT CLASSIFIERS

Publication No.: US2019179896A1 13/06/2019

Applicant: ABBYY DEV LLC [RU]

RU_2678716_C1

Abstract of: US2019179896A1

Systems and methods for using autoencoders for training natural language classifiers. An example method comprises: producing, by a computer system, a plurality of feature vectors, wherein each feature vector represents a natural language text of a text corpus, wherein the text corpus comprises a first plurality of annotated natural language texts and a second plurality of un-annotated natural language texts; training, using the plurality of feature vectors, an autoencoder represented by an artificial neural network; producing, by the autoencoder, an output of the hidden layer, by processing a training data set comprising the first plurality of annotated natural language texts; and training, using the training data set, a text classifier that accepts an input vector comprising the output of the hidden layer and yields a degree of association, with a certain text category, of a natural language text utilized to produce the output of the hidden layer.
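The pipeline above (train an autoencoder on all texts, then feed its hidden-layer output to a classifier) can be sketched end to end. The linear autoencoder, the nearest-centroid stand-in for the text classifier, and the toy 4-dimensional "feature vectors" are all simplifying assumptions, not ABBYY's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=2, steps=400, lr=0.05):
    """Linear autoencoder trained by gradient descent on squared
    reconstruction error; it can use annotated and un-annotated
    texts alike, since no labels are involved."""
    d = X.shape[1]
    W_enc = 0.1 * rng.standard_normal((d, hidden))
    W_dec = 0.1 * rng.standard_normal((hidden, d))
    for _ in range(steps):
        H = X @ W_enc                      # hidden-layer output
        G = 2.0 * (H @ W_dec - X) / len(X)
        grad_enc = X.T @ (G @ W_dec.T)
        grad_dec = H.T @ G
        W_enc -= lr * grad_enc
        W_dec -= lr * grad_dec
    return W_enc

def nearest_centroid(H, y):
    """Stand-in for the text classifier: class centroids over the
    hidden-layer features of the annotated texts."""
    return {int(c): H[y == c].mean(axis=0) for c in np.unique(y)}

def classify(h, centroids):
    return min(centroids, key=lambda c: np.linalg.norm(h - centroids[c]))

# Toy 'feature vectors': two topics in a 4-dimensional feature space.
A, B = np.array([1.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 1.0])
X = np.vstack([A, B] * 4) + 0.05 * rng.standard_normal((8, 4))
y = np.array([0, 1] * 4)                   # annotations (all texts here)

W_enc = train_autoencoder(X)               # unsupervised step
H = X @ W_enc                              # features for the classifier
centroids = nearest_centroid(H, y)
print([classify(h, centroids) for h in H])
```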

METHOD AND APPARATUS FOR PROCESSING CONVOLUTION OPERATION IN NEURAL NETWORK

Publication No.: EP3496008A1 12/06/2019

Applicant: SAMSUNG ELECTRONICS CO LTD [KR]

US_2019171930_A1

Abstract of: US2019171930A1

Provided are a method and apparatus for processing a convolution operation in a neural network, the method includes determining operands from input feature maps and kernels, on which a convolution operation is to be performed, dispatching operand pairs combined from the determined operands to multipliers in a convolution operator, generating outputs by performing addition and accumulation operations with respect to results of multiplication operations, and obtaining pixel values of output feature maps corresponding to a result of the convolution operation based on the generated outputs.
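The determine-operands / dispatch-to-multipliers / accumulate structure maps directly onto a (slow but explicit) software sketch; this is an illustration of the dataflow, not Samsung's hardware design:

```python
import numpy as np

def conv2d_operand_pairs(ifm, kernel):
    """Convolution organised as the abstract describes: determine the
    (input, weight) operand pairs for each output pixel, dispatch each
    pair to a multiplier, then add/accumulate the products into the
    output feature map."""
    kh, kw = kernel.shape
    out = np.zeros((ifm.shape[0] - kh + 1, ifm.shape[1] - kw + 1))
    for oy in range(out.shape[0]):
        for ox in range(out.shape[1]):
            # operand pairs for this output pixel
            pairs = [(ifm[oy + ky, ox + kx], kernel[ky, kx])
                     for ky in range(kh) for kx in range(kw)]
            products = [a * w for a, w in pairs]   # multipliers
            out[oy, ox] = sum(products)            # accumulation
    return out

ifm = np.arange(16, dtype=float).reshape(4, 4)   # input feature map
k = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_operand_pairs(ifm, k))
```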

DANGER RANKING USING END TO END DEEP NEURAL NETWORK

Publication No.: EP3495992A1 12/06/2019

Applicant: IMRA EUROPE SAS [FR]

Abstract of: US2019180144A1

A danger ranking training method comprising training a first deep neural network for generic object recognition within generic images, training a second deep neural network for specific object recognition within images of a specific application, training a third deep neural network for specific scene flow prediction within image sequences of the application, training a fourth deep neural network for potential danger areas localization within images or image sequences of the application using at least one human-trained danger tagging method, training a fifth deep neural network for non-visible specific object anticipation and/or visible specific object prediction within images or image sequences of the application, and determining at least one danger pixel within an image or an image sequence of the application using an end-to-end deep neural network as a sequence of transfer learning of the five deep neural networks followed by one or several end-to-end top layers.

Machine Learning Training Set Generation

Publication No.: US2019169962A1 06/06/2019

Applicant: SCHLUMBERGER TECHNOLOGY CORP [US]

CA_3033397_A1

Abstract of: US2019169962A1

Systems, computer-readable media, and methods for generating machine learning training data by obtaining reservoir data, determining subsections of the reservoir data, labeling the subsections of the reservoir data to generate labeled reservoir data, and feeding the labeled reservoir data into an artificial neural network. The reservoir data can be labeled using analysis data or based on interpretive input from an interpreter.

SYSTEM AND METHOD FOR GENERATING A CONFIDENCE VALUE FOR AT LEAST ONE STATE IN THE INTERIOR OF A VEHICLE

Publication No.: US2019171892A1 06/06/2019

Applicant: APTIV TECH LIMITED [BB]

Abstract of: US2019171892A1

A system for generating a confidence value for at least one state in the interior of a vehicle, comprising an imaging unit configured to capture at least one image of the interior of the vehicle, and a processing unit comprising a convolutional neural network, wherein the processing unit is configured to receive the at least one image from the imaging unit and to input the at least one image into the convolutional neural network, wherein the convolutional neural network is configured to generate a respective likelihood value for each of a plurality of states in the interior of the vehicle with the likelihood value for a respective state indicating the likelihood that the respective state is present in the interior of the vehicle, and wherein the processing unit is further configured to generate a confidence value for at least one of the plurality of states in the interior of the vehicle from the likelihood values generated by the convolutional neural network.
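One way to derive a confidence value from per-state likelihoods is a margin over the strongest competing state. This formula is an assumption for illustration; the abstract fixes no particular mapping.

```python
import numpy as np

def state_confidence(likelihoods, state):
    """One plausible confidence measure (an assumption - the abstract
    fixes no formula): the state's normalised likelihood minus that of
    the strongest competing state, giving a value in [-1, 1]."""
    p = np.asarray(likelihoods, dtype=float)
    p = p / p.sum()                  # normalise the likelihood values
    rivals = np.delete(p, state)
    return float(p[state] - rivals.max())

# Likelihoods for three interior states, e.g. empty / adult / child.
lik = [0.1, 0.6, 0.3]
print(round(state_confidence(lik, 1), 3))
```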

Image Transformation with a Hybrid Autoencoder and Generative Adversarial Network Machine Learning Architecture

Publication No.: US2019171908A1 06/06/2019

Applicant: UNIV CHICAGO [US]

Abstract of: US2019171908A1

An encoder artificial neural network (ANN) may be configured to receive an input image patch and produce a feature vector therefrom. The encoder ANN may have been trained with a first plurality of domain training images such that an output image patch visually resembling the input image patch can be generated from the feature vector. A generator ANN may be configured to receive the feature vector and produce a generated image patch from the feature vector. The generator ANN may have been trained with feature vectors derived from a first plurality of domain training images and a second plurality of generative training images such that the generated image patch visually resembles the input image patch but is constructed of newly-generated image elements visually resembling one or more image patches from the second plurality of generative training images.

METHOD AND APPARATUS FOR PROCESSING CONVOLUTION OPERATION IN NEURAL NETWORK

Publication No.: US2019171930A1 06/06/2019

Applicant: SAMSUNG ELECTRONICS CO LTD [KR]

Abstract of: US2019171930A1

Provided are a method and apparatus for processing a convolution operation in a neural network, the method includes determining operands from input feature maps and kernels, on which a convolution operation is to be performed, dispatching operand pairs combined from the determined operands to multipliers in a convolution operator, generating outputs by performing addition and accumulation operations with respect to results of multiplication operations, and obtaining pixel values of output feature maps corresponding to a result of the convolution operation based on the generated outputs.

SYSTEMS AND METHODS FOR FACIAL REPRESENTATION

Publication No.: US2019171868A1 06/06/2019

Applicant: FACEBOOK INC [US]

MX_2016005868_A

Abstract of: US2019171868A1

Systems, methods, and non-transitory computer readable media can align face images, classify face images, and verify face images by employing a deep neural network (DNN). A 3D-aligned face image can be generated from a 2D face image. An identity of the 2D face image can be classified based on provision of the 3D-aligned face image to the DNN. The identity of the 2D face image can comprise a feature vector.

LANGUAGE PROCESSING METHOD AND APPARATUS

Publication No.: US2019172466A1 06/06/2019

Applicant: SAMSUNG ELECTRONICS CO LTD [KR]

Abstract of: US2019172466A1

A language processing method and apparatus is disclosed. A language processing apparatus using a neural network may obtain context information from a source text using a neural network-based encoder, generate a prefix token from the context information using a neural network-based main decoder, generate a token sequence including at least two successive tokens sequentially following the prefix token using a skip model in response to the prefix token satisfying a preset condition, and indicate a target text in which the prefix token and the token sequence are combined as an inference result with respect to the source text.
