REDES NEURONALES / NEURAL NETWORKS


Resultados: 31 · Última actualización: 04/06/2023 14:38:00

Solicitudes publicadas en los últimos 30 días / Applications published in the last 30 days



Página 1 de 2


STEREO DEPTH ESTIMATION USING DEEP NEURAL NETWORKS

NºPublicación: US2023169321A1 01/06/2023

Solicitante:

NVIDIA CORP [US]

US_2021326678_A1

Resumen de: US2023169321A1

Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real-time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
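
As an illustration of the disparity head this abstract describes, the sketch below (a hypothetical simplification, not the patented network) builds a tiny cost-volume regressor in PyTorch: 3D convolutions with ELU activations process a matching cost volume, trainable 2D context layers refine the per-disparity scores, and a softmax-weighted expectation over disparity bins acts as a differentiable "learned argmax". All layer sizes and names are illustrative assumptions.

import torch
import torch.nn as nn

class SoftArgmaxDisparity(nn.Module):
    """Toy disparity head: 3D conv + ELU over a cost volume, then a
    softmax-weighted expectation over disparity bins (a differentiable
    stand-in for the 'machine learned argmax' named in the abstract)."""
    def __init__(self, max_disparity=64):
        super().__init__()
        self.max_disparity = max_disparity
        self.reduce = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ELU(),
            nn.Conv3d(8, 1, kernel_size=3, padding=1),
        )
        # Trainable 2D context layers applied to the per-disparity scores.
        self.context = nn.Sequential(
            nn.Conv2d(max_disparity, max_disparity, kernel_size=3, padding=1), nn.ELU(),
        )

    def forward(self, cost_volume):                            # (B, D, H, W) matching cost
        x = self.reduce(cost_volume.unsqueeze(1)).squeeze(1)   # (B, D, H, W)
        x = self.context(x)                                    # context-aware scores
        prob = torch.softmax(x, dim=1)
        disp_values = torch.arange(self.max_disparity, dtype=prob.dtype,
                                   device=prob.device).view(1, -1, 1, 1)
        return (prob * disp_values).sum(dim=1)                 # (B, H, W) disparity map

if __name__ == "__main__":
    head = SoftArgmaxDisparity(max_disparity=64)
    volume = torch.randn(2, 64, 32, 32)        # dummy cost volume
    print(head(volume).shape)                   # torch.Size([2, 32, 32])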


TWO-PASS END TO END SPEECH RECOGNITION

NºPublicación: AU2023202949A1 01/06/2023

Solicitante:

GOOGLE LLC [US]

US_2022310072_PA

Resumen de: AU2023202949A1

Two-pass automatic speech recognition (ASR) models can be used to perform streaming on-device ASR to generate a text representation of an utterance captured in audio data. Various implementations include a first-pass portion of the ASR model used to generate streaming candidate recognition(s) of an utterance captured in audio data. For example, the first-pass portion can include a recurrent neural network transducer (RNN-T) decoder. Various implementations include a second-pass portion of the ASR model used to revise the streaming candidate recognition(s) of the utterance and generate a text representation of the utterance. For example, the second-pass portion can include a listen, attend and spell (LAS) decoder. Various implementations include a shared encoder shared between the RNN-T decoder and the LAS decoder.
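
Structurally, the two-pass arrangement amounts to one encoder feeding both a streaming first pass and a full-context second pass. The sketch below is a heavily simplified stand-in (a greedy frame-wise head instead of a real RNN-T, and an attention rescorer instead of a full LAS decoder); it only shows where the shared encoder sits, and every dimension and class name is an assumption.

import torch
import torch.nn as nn

class TwoPassASRSketch(nn.Module):
    """Toy two-pass layout: one encoder shared by a streaming first-pass head
    and an attention-based second-pass rescorer (stand-ins for RNN-T / LAS)."""
    def __init__(self, feat_dim=80, hidden=256, vocab=100):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)    # shared encoder
        self.first_pass = nn.Linear(hidden, vocab)                    # streaming head
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.token_emb = nn.Embedding(vocab, hidden)
        self.second_pass = nn.Linear(hidden, vocab)                   # rescoring head

    def forward(self, frames):                        # frames: (B, T, feat_dim)
        enc, _ = self.encoder(frames)                 # (B, T, hidden)
        stream_logits = self.first_pass(enc)          # per-frame candidate scores
        hyp = stream_logits.argmax(dim=-1)            # greedy first-pass hypothesis
        q = self.token_emb(hyp)                       # (B, T, hidden)
        ctx, _ = self.attn(q, enc, enc)               # second pass attends over encoder states
        rescored_logits = self.second_pass(ctx)
        return stream_logits, rescored_logits

if __name__ == "__main__":
    model = TwoPassASRSketch()
    audio_features = torch.randn(2, 50, 80)
    first, second = model(audio_features)
    print(first.shape, second.shape)          # (2, 50, 100) twice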


SPATIALLY SPARSE NEURAL NETWORK ACCELERATOR FOR MULTI-DIMENSION VISUAL ANALYTICS

NºPublicación: US2023169319A1 01/06/2023

Solicitante:

INTEL CORP [US]

JP_2022059564_A

Resumen de: US2023169319A1

Systems, apparatuses and methods may provide for technology that decodes data via an instruction that indicates a number of rulebooks to be processed, an input feature size, an output feature size, and a plurality of feature map base addresses, rearranges spatially distributed voxel output feature maps in the decoded data based on weight planes, and performs a channel-wise multiply-accumulate (MAC) operation on the rearranged spatially distributed voxel output feature maps to obtain an output, wherein the channel-wise MAC operation is performed as partial accumulations by a plurality of processing elements.
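
The channel-wise multiply-accumulate split into partial accumulations can be pictured with the short NumPy sketch below; the shapes, the number of processing elements, and the reduction order are illustrative assumptions, not the patented dataflow.

import numpy as np

def channelwise_mac(feature_maps, weights, num_pes=4):
    """Toy channel-wise multiply-accumulate: each simulated processing element
    accumulates a slice of the channel dimension, and the partial sums are
    reduced at the end."""
    channels = feature_maps.shape[0]
    splits = np.array_split(np.arange(channels), num_pes)
    partials = []
    for pe_channels in splits:                          # one partial accumulation per PE
        partial = np.zeros(feature_maps.shape[1:])
        for c in pe_channels:
            partial += feature_maps[c] * weights[c]     # multiply-accumulate
        partials.append(partial)
    return np.sum(partials, axis=0)                     # final reduction of the partials

voxels = np.random.rand(16, 8, 8)     # (channels, H, W) voxel output feature maps
w = np.random.rand(16)                # one weight per channel
print(channelwise_mac(voxels, w).shape)   # (8, 8)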


SYSTEM AND METHOD FOR RECOMMENDING CONTENTS BASED ON DEEP NEURAL NETWORK

NºPublicación: US2023169330A1 01/06/2023

Solicitante:

TVSTORM CO LTD [KR]

Resumen de: US2023169330A1

The present inventive concept relates to a deep neural network-based content recommendation system and method in which a server and a client terminal share a learning function and a recommendation function, respectively. Drafted data for the content and for the user are generated according to the properties of the content and the user's preference properties for that content, and both sets of drafted data are used to provide a personalized content recommendation service that takes both content and user properties into account.


Continuous Convolution and Fusion in Neural Networks

NºPublicación: US2023169347A1 01/06/2023

Solicitante:

UATC LLC [US]

US_2019147335_PA

Resumen de: US2023169347A1

Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
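
The parametric continuous kernel idea can be sketched as a small MLP that maps the relative offset between a target point and each support point to kernel weights, which then aggregate the support features. The PyTorch sketch below is a simplified reading of that idea under assumed shapes and names, not the disclosed network.

import torch
import torch.nn as nn

class ContinuousConvSketch(nn.Module):
    """Toy parametric continuous convolution: an MLP maps continuous offsets in
    the support domain to kernel weights used to aggregate support features."""
    def __init__(self, in_feats=16, out_feats=32, coord_dim=3):
        super().__init__()
        self.kernel_mlp = nn.Sequential(
            nn.Linear(coord_dim, 64), nn.ReLU(),
            nn.Linear(64, in_feats * out_feats),
        )
        self.in_feats, self.out_feats = in_feats, out_feats

    def forward(self, target_xyz, support_xyz, support_feats):
        # target_xyz: (N, 3), support_xyz: (N, K, 3), support_feats: (N, K, in_feats)
        offsets = support_xyz - target_xyz.unsqueeze(1)           # continuous support offsets
        w = self.kernel_mlp(offsets)                              # (N, K, in*out)
        w = w.view(*offsets.shape[:2], self.in_feats, self.out_feats)
        # Weight each support feature with its offset-dependent kernel, sum over the K supports.
        return torch.einsum('nki,nkio->no', support_feats, w)

if __name__ == "__main__":
    conv = ContinuousConvSketch()
    out = conv(torch.randn(10, 3), torch.randn(10, 5, 3), torch.randn(10, 5, 16))
    print(out.shape)                  # torch.Size([10, 32])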


A method and an electronic device for generating training data for learning a neural network model and a storage medium storing a program for executing the method thereof

NºPublicación: KR20230075923A 31/05/2023

Solicitante:

재단법인대구경북과학기술원

Resumen de: KR20230075923A

Provided are a method of generating training data for training a first neural network model, a storage medium storing a program for executing the method, and an electronic device. The training data generation method includes: obtaining an image; determining, by using a second neural network model, a size for cropping the image; obtaining a label value of the cropped image based on the determined size; and obtaining training data based on the cropped image and the label value.
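
The flow described above (model-chosen crop size, crop, derive label, collect the pair) can be sketched as a small pipeline. The helpers below are hypothetical stand-ins for the second model and the labelling rule; only the ordering of the steps follows the abstract.

import numpy as np

def make_training_example(image, size_model, label_fn):
    """Toy pipeline: a second model picks a crop size, the image is cropped to
    that size, a label is derived from the crop, and the (crop, label) pair
    becomes one training example (all helpers are hypothetical)."""
    crop_size = size_model(image)                       # model-chosen crop size
    h, w = image.shape[:2]
    ch, cw = min(crop_size, h), min(crop_size, w)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]          # crop to the chosen size
    label = label_fn(crop)                              # label derived from the crop
    return crop, label                                  # training pair

# Stand-ins for the second neural network model and the labelling rule.
size_model = lambda img: 32 + int(img.mean()) % 32
label_fn = lambda crop: float(crop.mean() > 127)

image = np.random.randint(0, 256, size=(128, 128, 3), dtype=np.uint8)
crop, label = make_training_example(image, size_model, label_fn)
print(crop.shape, label)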


METHOD AND SYSTEM FOR ENCODING AND DECODING DATA BASED ON DEEP NEURAL NETWORK

NºPublicación: KR20230075247A 31/05/2023

Solicitante:

서울대학교산학협력단

Resumen de: KR20230075247A

The deep-neural-network-based data compression and reconstruction method according to the present invention is a method of compressing and reconstructing data used in deep learning application services that require near-original data for post-hoc analysis, and may include: generating first compressed data that compresses only the information required by the deep neural network, which cannot be reconstructed on its own but has a relatively high compression ratio; generating, in response to a request from a server for additional data, second compressed data containing only the information required for data reconstruction and transmitting it to the server; and merging the first compressed data and the second compressed data to generate merged compressed data and then reconstructing the original data.
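
A minimal sketch of that two-stream protocol is shown below, under a very coarse assumption: the first stream keeps only a subsample (what an inference network might need), the second carries the samples it dropped, and merging both restores the original. This is only to make the three steps concrete, not the patented codec.

import numpy as np

def compress_stage1(data, factor=4):
    """Keep only what the downstream network needs (here: a coarse subsample);
    on its own this cannot reconstruct the original, mirroring the abstract."""
    return data[::factor]

def compress_stage2(data, factor=4):
    """Keep only the information needed for reconstruction: the samples that
    stage 1 dropped (sent later, on request from the server)."""
    mask = np.ones(len(data), dtype=bool)
    mask[::factor] = False
    return data[mask]

def merge_and_restore(stage1, stage2, factor=4):
    """Merge both compressed streams back into the original signal."""
    restored = np.empty(len(stage1) + len(stage2), dtype=stage1.dtype)
    restored[::factor] = stage1
    mask = np.ones(len(restored), dtype=bool)
    mask[::factor] = False
    restored[mask] = stage2
    return restored

signal = np.arange(16, dtype=np.float32)
s1 = compress_stage1(signal)
s2 = compress_stage2(signal)
assert np.array_equal(merge_and_restore(s1, s2), signal)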


METHOD AND APPARATUS FOR FINDING OPTIMAL PATH BY APPLYING ARTIFICIAL NEURAL NETWORK

NºPublicación: KR20230074058A 26/05/2023

Solicitante:

김태호

Resumen de: KR20230074058A

A method and apparatus for optimal path search applying an artificial neural network are disclosed. According to an embodiment of the present disclosure, the optimal path search method applying an artificial neural network may include: based on a path search algorithm selected from among a plurality of path search algorithms, setting a logistics state for which a move is to be decided as a start node and searching paths a preset number of times while expanding a graph from the start node; inferring an execution probability for each edge from the start node based on the number of previously selected, no-longer-expandable nodes belonging to each child node group linked to the start node; and determining an edge based on the inferred execution probabilities and searching for the optimal path while selecting the next logistics state.


End-to-End Streaming Keyword Spotting

NºPublicación: US2023162729A1 25/05/2023

Solicitante:

GOOGLE LLC [US]

KR_20230006055_PA

Resumen de: US2023162729A1

A method for detecting a hotword includes receiving a sequence of input frames that characterize streaming audio captured by a user device and generating a probability score indicating a presence of a hotword in the streaming audio using a memorized neural network. The network includes sequentially-stacked singular value decomposition filter (SVDF) layers and each SVDF layer includes at least one neuron. Each neuron includes a respective memory component, a first stage configured to perform filtering on audio features of each input frame individually and output to the memory component, and a second stage configured to perform filtering on all the filtered audio features residing in the respective memory component. The method also includes determining whether the probability score satisfies a hotword detection threshold and initiating a wake-up process on the user device for processing additional terms.
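
The two-stage SVDF idea can be sketched as a bank of nodes, each with a feature filter applied to every frame and a time filter applied over the last few stage-1 outputs kept in a rolling memory. The PyTorch sketch below is a simplified reading with assumed dimensions, not the patented layer.

import torch
import torch.nn as nn

class SVDFLayerSketch(nn.Module):
    """Toy SVDF-style node bank: stage 1 filters the features of each frame
    independently, results are kept in a rolling memory of length `memory`,
    and stage 2 filters across that memory."""
    def __init__(self, feat_dim=40, nodes=64, memory=8):
        super().__init__()
        self.memory = memory
        self.feature_filter = nn.Parameter(torch.randn(nodes, feat_dim) * 0.01)
        self.time_filter = nn.Parameter(torch.randn(nodes, memory) * 0.01)

    def forward(self, frames):                           # frames: (B, T, feat_dim)
        stage1 = torch.einsum('btf,nf->btn', frames, self.feature_filter)
        # Rolling memory: at each step t, stage 2 sees the last `memory` stage-1 outputs.
        padded = nn.functional.pad(stage1, (0, 0, self.memory - 1, 0))    # pad the time dim
        windows = padded.unfold(dimension=1, size=self.memory, step=1)    # (B, T, N, M)
        return torch.einsum('btnm,nm->btn', windows, self.time_filter)

if __name__ == "__main__":
    layer = SVDFLayerSketch()
    audio = torch.randn(2, 20, 40)
    print(layer(audio).shape)        # torch.Size([2, 20, 64])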


ENCODING RELATIVE OBJECT INFORMATION INTO NODE EDGE FEATURES

NºPublicación: US2023159059A1 25/05/2023

Solicitante:

ZOOX INC [US]

Resumen de: US2023159059A1

Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a predicted position of the object at a subsequent timestep. Further, a predicted trajectory of the object may be determined using predicted positions of the object at various timesteps.
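
A tiny message-passing step in that spirit is sketched below: node features carry per-object and scene state, edge features carry relative positions, and a decoder head predicts each object's position at the next timestep. Dimensions, the GRU update, and the single-step decoding are assumptions for illustration only.

import torch
import torch.nn as nn

class RelativeEdgeGNNSketch(nn.Module):
    """Toy graph step: node features hold object/scene state, edge features hold
    relative positions, messages combine both, and a decoder predicts each
    object's (x, y) at the next timestep."""
    def __init__(self, node_dim=32, edge_dim=2, hidden=64):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(node_dim + edge_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, node_dim))
        self.update = nn.GRUCell(node_dim, node_dim)
        self.decode = nn.Linear(node_dim, 2)             # predicted position at t+1

    def forward(self, nodes, positions):
        # nodes: (N, node_dim) object + scene features, positions: (N, 2)
        rel = positions.unsqueeze(0) - positions.unsqueeze(1)            # (N, N, 2) edge features
        src = nodes.unsqueeze(0).expand(len(nodes), -1, -1)              # sender features
        msgs = self.message(torch.cat([src, rel], dim=-1)).sum(dim=1)    # aggregate per node
        nodes = self.update(msgs, nodes)
        return self.decode(nodes)

if __name__ == "__main__":
    gnn = RelativeEdgeGNNSketch()
    print(gnn(torch.randn(5, 32), torch.randn(5, 2)).shape)   # torch.Size([5, 2])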


GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE

NºPublicación: US2023162052A1 25/05/2023

Solicitante:

INST GEOLOGY & GEOPHYSICS CAS [CN]

CN_113837501_PA

Resumen de: US2023162052A1

A graph neural network (GNN)-based prediction system for total organic carbon (TOC) in shale solves the problem that the existing shale TOC prediction method cannot fully analyze the complex nonlinear relationship between all logging curves and TOC. The prediction system adopts a method including: acquiring and preprocessing a plurality of logging curves of a target well location in a target shale bed to acquire a plurality of standardized logging curves, windowing the plurality of standardized logging curves, and inputting the windowed logging curves and weight matrix into a trained GNN-based TOC prediction network to acquire TOC of the target well location. The prediction system inputs the plurality of logging curves as correlative multi-dimensional dynamic graph data for analysis and can acquire the complex nonlinear relationship between the logging curves and TOC, thus improving the prediction accuracy of TOC.


IMAGE PROCESSING METHOD AND SYSTEM, DEVICE, AND MEDIUM

NºPublicación: WO2023087597A1 25/05/2023

Solicitante:

INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO LTD [CN]

CN_113822287_A

Resumen de: WO2023087597A1

An image processing method and system, a computer device, and a readable storage medium. The method comprises the following steps: preprocessing an image in an initial data set to obtain a training data set (S1); training an image segmentation neural network by using the training data set (S2); removing the last loss function layer of the trained image segmentation neural network to obtain an inference network (S3); inputting the training data set into the inference network so as to obtain a plurality of logic vectors (S4); training a check network according to the plurality of logic vectors, the initial data set, and the mask of each image in the initial data set (S5); and using the inference network and the trained check network to perform inference on an image to be processed, so as to obtain the mask of the image to be processed (S6). According to the method, a solution is provided for the condition of the memory overflow of a high-resolution image during large-scale image segmentation network training, and the video memory required by network training is reduced while the image segmentation precision is ensured.


AUDIO-BASED IDENTIFICATION INTERFACES FOR SELECTING OBJECTS FROM VIDEO

NºPublicación: US2023162209A1 25/05/2023

Solicitante:

REVEALIT CORP [US]

US_2023153836_PA

Resumen de: US2023162209A1

A method, system, and device for audio-based identification interfaces for selecting objects from video generates and stores frequency-based audio identifiers associated with segments of an audio stream that is integrated with a video stream. The generation of the frequency-based audio identifiers may be performed by a hashing function applied to audio frequencies within audio segments. The video stream comprises identified objects that may be identified by application of a trained neural network. An audio segment is received from a user and a corresponding frequency-based audio identifier is generated and matched against stored frequency-based audio identifiers. The matching determines an audio segment and a temporally corresponding identified object, which is then embodied within an interactive user interface.


HIGH-PERFORMANCE COMPUTING SYSTEM FOR ACCELERATING DEEP LEARNING BASED GRAPH CONVOLUTIONAL NEURAL NETWORKS AND METHOD OF THE SAME

NºPublicación: KR20230071481A 23/05/2023

Solicitante:

한국과학기술원

Resumen de: KR20230071481A

Various embodiments provide a high-performance computing system for accelerating inference of deep-learning-based graph convolutional neural networks (GCNs), and a method thereof. According to various embodiments, the computing system is configured to perform an aggregation phase that extracts input feature vectors from the nodes of a graph and a combination phase that derives output feature vectors using the input feature vectors, and each of the aggregation phase and the combination phase is performed through sparse-dense general matrix multiplication (SpDeGEMM) based on row-wise products.
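
The row-wise-product dataflow named above can be shown with a short NumPy/SciPy sketch: each output row is built by walking the nonzeros of the corresponding sparse row and accumulating scaled rows of the dense matrix. This is only a reference description of the dataflow, not the accelerator design.

import numpy as np
from scipy.sparse import random as sparse_random

def rowwise_spdegemm(a_csr, b_dense):
    """Sparse-dense matmul computed row-wise: for each sparse row, accumulate
    its nonzeros times the matching rows of the dense matrix."""
    out = np.zeros((a_csr.shape[0], b_dense.shape[1]))
    for i in range(a_csr.shape[0]):
        start, end = a_csr.indptr[i], a_csr.indptr[i + 1]
        for k, val in zip(a_csr.indices[start:end], a_csr.data[start:end]):
            out[i] += val * b_dense[k]        # partial product accumulated into row i
    return out

adjacency = sparse_random(6, 6, density=0.3, format='csr', random_state=0)
features = np.random.rand(6, 4)
assert np.allclose(rowwise_spdegemm(adjacency, features), adjacency @ features)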


SPARSITY-AWARE DATASTORE FOR INFERENCE PROCESSING IN DEEP NEURAL NETWORK ARCHITECTURES

NºPublicación: WO2023086702A1 19/05/2023

Solicitante:

INTEL CORP [US]
MATHAIKUTTY DEEPAK [US]
RAHA ARNAB [US]
SUNG RAYMOND [US]
MOHAPATRA DEBABRATA [US]
BRICK CORMAC [US]

US_2022067524_A1

Resumen de: WO2023086702A1

Systems, apparatuses and methods may provide for technology that prefetches compressed data and a sparsity bitmap from a memory to store the compressed data in a decode buffer, where the compressed data is associated with a plurality of tensors, wherein the compressed data is in a compressed format. The technology aligns the compressed data with the sparsity bitmap to generate decoded data, and provides the decoded data to a plurality of processing elements.
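
The "align the compressed data with the sparsity bitmap" step amounts to scattering the densely packed nonzero values back to the positions the bitmap marks. The sketch below shows that decode step in NumPy; the flat layout and dtypes are assumptions for illustration.

import numpy as np

def align_with_bitmap(compressed_values, sparsity_bitmap):
    """Toy decode: expand packed nonzero values back to their original positions
    using the sparsity bitmap (1 = value present, 0 = zero)."""
    decoded = np.zeros(sparsity_bitmap.shape, dtype=compressed_values.dtype)
    decoded[sparsity_bitmap.astype(bool)] = compressed_values
    return decoded

bitmap = np.array([1, 0, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
packed = np.array([3.0, 7.5, -2.0, 1.25])            # only the nonzero values are stored
print(align_with_bitmap(packed, bitmap))             # [3. 0. 0. 7.5 -2. 0. 1.25 0.]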


REINFORCEMENT LEARNING WITH INDUCTIVE LOGIC PROGRAMMING

NºPublicación: WO2023083113A1 19/05/2023

Solicitante:

IBM [US]
IBM CHINA CO LTD [CN]

US_2023143937_PA

Resumen de: WO2023083113A1

Methods and systems for training a model and automated motion include learning Markov decision processes using reinforcement learning in respective training environments. Logic rules are extracted from the Markov decision processes. A reward logic neural network (LNN) and a safety LNN are trained using the logic rules extracted from the Markov decision processes. The reward LNN and the safety LNN each take a state-action pair as an input and output a corresponding score for the state-action pair.


INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

NºPublicación: WO2023085191A1 19/05/2023

Solicitante:

SONY GROUP CORP [JP]

Resumen de: WO2023085191A1

This information processing device (100) comprises: an acquisition unit (131) that acquires an utterance group including a plurality of turns, which are elements of an utterance delimited under a prescribed condition; a preprocessing unit (132) that combines into one sentence the current turn (the turn within the utterance group for which output is to be obtained in a prescribed task) and a plurality of turns located before and after it in the time sequence, inputs the sentence to a pre-trained model, and outputs a feature quantity corresponding to each turn; and an estimation unit (133) that obtains output corresponding to the prescribed task using a neural network that, in one of the intermediate layers to which the feature quantities output by the preprocessing unit are input, applies a prescribed weight to the score output by an attention mechanism for the feature quantity corresponding to the current turn.


SYSTEMS AND METHODS OF 3D OBJECT RECONSTRUCTION USING A NEURAL NETWORK

NºPublicación: AU2021331691A1 18/05/2023

Solicitante:

ARTEC EUROPE S A R L

CN_115885315_PA

Resumen de: AU2021331691A1

In accordance with some embodiments, a method is provided for determining correspondence between a projection pattern and an image of the projection pattern shone onto the surface of an object. The method includes obtaining an image of an object while a projection pattern is shone on the surface of the object. The method further includes using a neural network to output a correspondence between respective pixels in the image and coordinates of the projection pattern. The method further includes, using the correspondence between respective pixels in the image and coordinates of the projection pattern, reconstructing a shape of the surface of the object.


CREATING AN ACCURATE LATENCY LOOKUP TABLE FOR NPU

NºPublicación: US2023153569A1 18/05/2023

Solicitante:

SAMSUNG ELECTRONICS CO LTD [KR]

CN_116151371_PA

Resumen de: US2023153569A1

A system and a method are disclosed for estimating a latency of a layer of a neural network. A host processing device adds an auxiliary layer to a selected layer of the neural network. A neural processing unit executes an inference operation over the selected layer and the auxiliary layer. A total latency is measured for the inference operation for the selected layer and the auxiliary layer, and an overhead latency is measured for the inference operation. The overhead latency is subtracted from the total latency to generate an estimate of the latency of the layer. In one embodiment, measuring the overhead latency for the inference operation that is associated with the auxiliary layer involves modeling the overhead latency based on a linear regression of an input data size that is input to the selected layer, and an output data size that is output from the auxiliary layer.
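
The arithmetic in the abstract (layer latency = measured total minus modelled overhead, with the overhead fitted as a linear function of input and output data size) can be written down directly. The sketch below uses hypothetical calibration numbers and a plain least-squares fit; units and values are assumptions.

import numpy as np

def fit_overhead_model(input_sizes, output_sizes, measured_overheads):
    """Fit overhead latency as a linear function of input and output data size
    (least squares on [in_size, out_size, 1])."""
    X = np.column_stack([input_sizes, output_sizes, np.ones(len(input_sizes))])
    coeffs, *_ = np.linalg.lstsq(X, measured_overheads, rcond=None)
    return coeffs

def estimate_layer_latency(total_latency, in_size, out_size, coeffs):
    """Layer latency = measured total (selected layer + auxiliary layer) minus
    the modelled overhead of the auxiliary layer."""
    overhead = coeffs[0] * in_size + coeffs[1] * out_size + coeffs[2]
    return total_latency - overhead

# Hypothetical calibration measurements (sizes in KB, latencies in microseconds).
ins = np.array([16, 32, 64, 128])
outs = np.array([8, 16, 32, 64])
overheads = np.array([5.0, 7.0, 11.0, 19.0])
coeffs = fit_overhead_model(ins, outs, overheads)
print(estimate_layer_latency(total_latency=150.0, in_size=64, out_size=32, coeffs=coeffs))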


EXPLAINABLE TRANSDUCER TRANSFORMERS

NºPublicación: US2023153599A1 18/05/2023

Solicitante:

UMNAI LTD [MT]

US_2022198254_A1

Resumen de: US2023153599A1

An explainable transducer transformer (XTT) may be a finite state transducer, together with an Explainable Transformer. Variants of the XTT may include an explainable Transformer-Encoder and an explainable Transformer-Decoder. An exemplary Explainable Transducer may be used as a partial replacement in trained Explainable Neural Network (XNN) architectures or logically equivalent architectures. An Explainable Transformer may replace black-box model components of a Transformer with white-box model equivalents, in both the sub-layers of the encoder and decoder layers of the Transformer. XTTs may utilize an Explanation and Interpretation Generation System (EIGS), to generate explanations and filter such explanations to produce an interpretation of the answer, explanation, and its justification.


INCENTIVIZED NEURAL NETWORK TRAINING AND ASSURANCE PROCESSES

NºPublicación: US2023153836A1 18/05/2023

Solicitante:

REVEALIT CORP [US]

US_2023162209_PA

Resumen de: US2023153836A1

A method and system for incentivized neural network training and assurance processes provides incentives to object miners to identify objects in video streams for the purposes of enhancing the training of computer-implemented neural networks on the identified objects and/or augmenting the results of automatic object identification by trained neural networks. An object mining user interface and process is provided to object miners that provides incentives for identifying objects in video streams and technical capabilities for designating identified objects within multiple multi-dimensional regions of pixels. Incentives may be token-based and in accordance with end user interactions within a visual user interface with representations of the miner-identified objects within a video stream.


Semi-Supervised Machine Learning Method and System Suitable for Identification of Patient Subgroups in Electronic Healthcare Records

NºPublicación: US2023154627A1 18/05/2023

Solicitante:

SENSYNE HEALTH GROUP LTD [GB]

WO_2021170735_A1

Resumen de: US2023154627A1

A computer-implemented method of training an artificial neural network is provided. The artificial neural network comprises a feedforward autoencoder and a classification layer, the feedforward autoencoder comprising an input layer of neurons, an output layer of neurons, an embedding layer of neurons between the input and output layers, one or more non-linear intermediate layers of neurons between the input and embedding layer, one or more further non-linear intermediate layers of neurons between the embedding and output layer and respective network weights for each layer, wherein the number of neurons in the embedding layer is less than the number of neurons in the input layer. Data units are defined within a data space to have a data value for each dimension of the data space and a classification label associated with each data unit indicating one of a plurality of groups to which each data sample belongs. The network is trained using a combination of unsupervised learning to reduce a reconstruction loss of the autoencoder and supervised learning to reduce a classification loss of the classification layer. The resulting network finds application in providing embeddings for classification or clustering, for example in the context of analysing phenotypical data, such as physiological data, which may be obtained from electronic health records. The classification or clustering may be used to predict or identify pathologies or to analyse patient subgroups to analyse, for example
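
The training setup described above (a feedforward autoencoder with a narrow embedding layer, a classification head on the embedding, and a combined reconstruction + classification loss) can be sketched compactly. The dimensions, loss weighting, and data below are illustrative assumptions, not the claimed configuration.

import torch
import torch.nn as nn

class EmbeddingAutoencoderSketch(nn.Module):
    """Toy feedforward autoencoder with a classification head on the embedding,
    trained with a combined reconstruction + classification loss."""
    def __init__(self, in_dim=64, embed_dim=8, n_groups=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, embed_dim))
        self.decoder = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))
        self.classifier = nn.Linear(embed_dim, n_groups)

    def forward(self, x):
        z = self.encoder(x)                     # embedding narrower than the input
        return self.decoder(z), self.classifier(z)

model = EmbeddingAutoencoderSketch()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 64)                      # stand-in for patient record features
labels = torch.randint(0, 3, (16,))          # stand-in subgroup labels

recon, logits = model(x)
loss = nn.functional.mse_loss(recon, x) + 0.5 * nn.functional.cross_entropy(logits, labels)
loss.backward()
optim.step()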


LUNG ULTRASOUND PROCESSING SYSTEMS AND METHODS

NºPublicación: US2023148996A1 18/05/2023

Solicitante:

ALVEOLAI DATAFUEL INC [CA]

WO_2022016262_A1

Resumen de: US2023148996A1

Methods and systems for processing data to distinguish between a plurality of conditions in lung ultrasound images and, in particular, lung ultrasound images containing B lines. Neural network systems and methods, in which the processor is trained using lung ultrasound images to distinguish between acute respiratory distress syndrome due to COVID-19, acute respiratory distress syndrome due to non-COVID-19 causes, and hydrostatic pulmonary edema.


SCALING HALF-PRECISION FLOATING POINT TENSORS FOR TRAINING DEEP NEURAL NETWORKS

NºPublicación: US2023141038A1 11/05/2023

Solicitante:

INTEL CORP [US]

US_2022269931_PA

Resumen de: US2023141038A1

A graphics processor is described that includes a single instruction, multiple thread (SIMT) architecture including hardware multithreading. The multiprocessor can execute parallel threads of instructions associated with a command stream, where the multiprocessor includes a set of functional units to execute at least one of the parallel threads of the instructions. The set of functional units can include a mixed precision tensor processor to perform tensor computations to generate loss data. The loss data is stored as a first floating-point data type and scaled by a scaling factor to enable a data distribution of a gradient tensor generated based on the loss data to be represented by a second floating point data type.
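
The loss-scaling idea referred to here is the standard mixed-precision trick: multiply the loss by a factor so the resulting gradients stay representable in a narrower float format, then divide the gradients by the same factor before the update. The minimal manual loop below illustrates it on CPU (frameworks wrap this up, e.g. torch.cuda.amp.GradScaler); it is not the patented hardware path.

import torch
import torch.nn as nn

model = nn.Linear(32, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scale = 1024.0                               # loss scaling factor

inputs = torch.randn(8, 32)
targets = torch.randn(8, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(inputs), targets)
(loss * scale).backward()                    # scaled loss -> scaled gradients
for p in model.parameters():
    if p.grad is not None:
        p.grad.div_(scale)                   # unscale before the optimizer step
optimizer.step()
print(float(loss))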


NEURAL NETWORK TRAINING WITH ACCELERATION

NºPublicación: US2023146611A1 11/05/2023

Solicitante:

SAMSUNG ELECTRONICS CO LTD [KR]

CN_116108881_PA

Resumen de: US2023146611A1

A system and method for training a neural network. In some embodiments, the system includes a computational storage device including a backing store. The computational storage device may be configured to: store, in the backing store, an embedding table for a neural network embedding operation; receive a first index vector including a first index and a second index; retrieve, from the backing store: a first row of the embedding table, corresponding to the first index, and a second row of the embedding table, corresponding to the second index; and calculate a first embedded vector based on the first row and the second row.
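
The retrieve-and-combine step described above looks, at the algorithmic level, like an embedding lookup followed by a reduction. The sketch below shows that step in PyTorch with summation as the (assumed) combination; in the patent the table resides in a computational storage device's backing store rather than in host memory.

import torch
import torch.nn as nn

table = nn.Embedding(num_embeddings=1000, embedding_dim=16)   # stand-in embedding table

index_vector = torch.tensor([[12, 407]])          # first and second index
rows = table(index_vector)                        # retrieve the two rows: (1, 2, 16)
embedded_vector = rows.sum(dim=1)                 # combine them into one vector: (1, 16)
print(embedded_vector.shape)

# nn.EmbeddingBag(mode="sum") performs the same retrieve-and-combine in a single call.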


