Which of the following describes a neural network without an activation function?
A neural network without an activation function is equivalent to a linear regression model. A neural network is a computational model that consists of layers of interconnected nodes (neurons) that process inputs and produce outputs. An activation function determines the output of a neuron based on its input and introduces non-linearity into the network, which allows it to model complex, non-linear relationships between inputs and outputs. Without an activation function, each layer computes only a linear combination of its inputs and weights, and a composition of linear maps is itself linear, so the entire network reduces to a linear regression model.
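The collapse of stacked linear layers into a single linear map can be verified numerically. A minimal sketch with NumPy (toy layer sizes and random weights chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with weights and biases but no activation function in between.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)

# Forward pass through both layers, with no non-linearity applied.
h = W1 @ x + b1
y_two_layers = W2 @ h + b2

# The same map collapses into one linear layer: y = (W2 W1) x + (W2 b1 + b2).
W = W2 @ W1
b = W2 @ b1 + b2
y_one_layer = W @ x + b

print(np.allclose(y_two_layers, y_one_layer))  # True
```

No matter how many such layers are stacked, the result is a single matrix-vector product plus a bias, which is exactly the hypothesis class of linear regression.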
You create a prediction model with 96% accuracy. While the model's true positive rate (TPR) is performing well at 99%, the true negative rate (TNR) is only 50%. Your supervisor tells you that the TNR needs to be higher, even if it decreases the TPR. Upon further inspection, you notice that the vast majority of your data is truly positive.
What method could help address your issue?
Oversampling is a method that can help address this issue, which stems from imbalanced data: one class is much more frequent than the other in the dataset. The imbalance can bias the model toward the majority class and produce a low true negative rate. Oversampling creates synthetic samples of the minority class, or replicates existing ones, to balance the class distribution. In this scenario the truly negative examples are the minority class, so oversampling them helps the model learn from that class and improves the true negative rate. Reference: [Handling imbalanced datasets in machine learning], [Oversampling and undersampling in data analysis - Wikipedia]
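A minimal sketch of random oversampling, assuming a toy labeled dataset where the positive class (label 1) heavily outnumbers the negative class (label 0), mirroring the scenario above:

```python
import random

# Toy imbalanced dataset: 95 positive samples, 5 negative samples.
data = [(x, 1) for x in range(95)] + [(x, 0) for x in range(5)]

def oversample_minority(samples, minority_label=0, seed=42):
    """Replicate minority-class samples (with replacement) until the classes balance."""
    rng = random.Random(seed)
    minority = [s for s in samples if s[1] == minority_label]
    majority = [s for s in samples if s[1] != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return samples + extra

balanced = oversample_minority(data)
counts = {label: sum(1 for _, y in balanced if y == label) for label in (0, 1)}
print(counts)  # {0: 95, 1: 95}
```

Simple replication like this is the most basic form; libraries such as imbalanced-learn also offer synthetic approaches (e.g. SMOTE) that interpolate new minority samples rather than duplicating existing ones.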
A product manager is designing an Artificial Intelligence (AI) solution and wants to do so responsibly, evaluating both positive and negative outcomes.
The team creates a shared taxonomy of potential negative impacts and conducts an assessment along vectors such as severity, impact, frequency, and likelihood.
Which modeling technique does this team use?
Harms modeling is a technique that helps product managers design AI solutions responsibly by evaluating both positive and negative outcomes. It involves creating a shared taxonomy of potential negative impacts and assessing them along vectors such as severity, impact, frequency, and likelihood. Harms modeling can help identify and mitigate risks or harms that may arise from using AI solutions.
Word Embedding describes a task in natural language processing (NLP) where:
Word embedding is a task in natural language processing (NLP) where words are converted into numerical vectors that represent their meaning, usage, or context. Word embedding can help reduce the dimensionality and sparsity of text data, as well as enable various operations and comparisons among words based on their vector representations. Some of the common methods for word embedding are:
One-hot encoding: One-hot encoding is a method that assigns a unique binary vector to each word in a vocabulary. The vector has only one element with a value of 1 (the hot bit) and the rest with a value of 0. One-hot encoding can create distinct and orthogonal vectors for each word, but it does not capture any semantic or syntactic information about words.
Word2vec: Word2vec is a method that learns a dense and continuous vector representation for each word based on its context in a large corpus of text. Word2vec can capture the semantic and syntactic similarity and relationships among words, such as synonyms, antonyms, analogies, or associations.
GloVe: GloVe (Global Vectors for Word Representation) is a method that combines the advantages of count-based methods (such as latent semantic analysis) and predictive methods (such as Word2vec) to create word vectors. GloVe can leverage both global and local information from a large corpus of text to capture the co-occurrence patterns and probabilities of words.
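The contrast between the methods above can be illustrated with a small sketch. The one-hot vectors are real one-hot encodings; the dense vectors are hand-crafted stand-ins for learned Word2vec/GloVe embeddings (real embeddings are learned from a corpus, and these particular numbers are purely illustrative):

```python
import numpy as np

vocab = ["king", "queen", "man", "woman"]

# One-hot encoding: each word gets a distinct, orthogonal binary vector.
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}
# Any two distinct one-hot vectors have dot product 0: no similarity information.
print(one_hot["king"] @ one_hot["queen"])  # 0.0

# Hand-crafted dense vectors (illustrative stand-ins for learned embeddings).
dense = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "man":   np.array([0.5, 0.9, 0.0]),
    "woman": np.array([0.5, 0.3, 0.7]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# With dense vectors, analogies become vector arithmetic: king - man + woman ≈ queen.
analogy = dense["king"] - dense["man"] + dense["woman"]
print(round(cosine(analogy, dense["queen"]), 3))  # 1.0
```

The one-hot representation treats every pair of words as equally unrelated, while dense embeddings place related words near each other in vector space, which is what enables the analogy and similarity operations described above.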