
1Z0-1127-24 / P5

Test Title:
1Z0-1127-24 / P5

Description:
1Z0-1127-24 / P5

Creation Date: 2024/06/04

Category: Other

Number of Questions: 14

Rating: (1)
Syllabus:

What is LangChain?. A JavaScript library for natural language processing. A Ruby library for text generation. A Python library for building applications with Large Language Models. A Java library for text summarization.

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?. It selectively updates only a fraction of the model's weights. It does not update any weights but restructures the model architecture. It updates all the weights of the model uniformly. It increases the training time as compared to Vanilla fine-tuning.

When does a chain typically interact with memory in a run within the LangChain framework?. After user input but before chain execution, and again after core logic but before output. Continuously throughout the entire chain execution process. Before user input and after chain execution. Only after the output has been generated.
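That timing can be sketched framework-free (the class and function names below are invented for illustration; they are not LangChain API): memory is read after the user's input arrives but before the chain's core logic runs, and written again after the core logic but before the output is returned.

```python
class SimpleMemory:
    """Minimal buffer memory: loaded before the chain's core logic, saved after it."""

    def __init__(self):
        self.history = []

    def load(self):
        return "\n".join(self.history)

    def save(self, user_input, output):
        self.history.append(f"Human: {user_input}")
        self.history.append(f"AI: {output}")


def run_chain(user_input, memory):
    context = memory.load()          # read memory: after user input, before core logic
    output = f"echo({user_input})"   # stand-in for the chain's core logic
    memory.save(user_input, output)  # write memory: after core logic, before output
    return output


mem = SimpleMemory()
run_chain("hi", mem)
```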

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?. Selecting a random word from the entire vocabulary at each step. Choosing the word with the highest probability at each step of decoding. Picking a word based on its position in a sentence structure. Using a weighted random selection based on a modulated distribution.
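As a minimal illustration of greedy decoding, the sketch below (plain Python over toy per-step probability distributions, not a real model) always selects the highest-probability token at each step:

```python
def greedy_decode(step_distributions):
    """Pick the single most probable token at every decoding step (no sampling)."""
    return [max(probs, key=probs.get) for probs in step_distributions]


# Toy per-step distributions over a tiny vocabulary.
steps = [
    {"the": 0.6, "a": 0.3, "an": 0.1},
    {"cat": 0.5, "dog": 0.4, "fox": 0.1},
]
print(greedy_decode(steps))  # ['the', 'cat']
```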

What does a cosine distance of 0 indicate about the relationship between two embeddings?. They are completely dissimilar. They are unrelated. They have the same magnitude. They are similar in direction.
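This can be checked numerically: two vectors pointing in the same direction have cosine distance 0 even when their magnitudes differ, while orthogonal vectors have distance 1 (pure-Python sketch):

```python
import math


def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm


# Parallel vectors (same direction, different magnitude) -> distance ~0.
print(cosine_distance([1.0, 2.0], [2.0, 4.0]))
# Orthogonal vectors -> distance 1.
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))
```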

What do prompt templates use for templating in language model applications?. Python's lambda functions. Python's str.format syntax. Python's list comprehension syntax. Python's class and object structures.
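The str.format-style templating the question refers to can be demonstrated with plain Python, no LangChain import needed (the template text itself is made up for illustration):

```python
# Prompt templates substitute named variables using Python's str.format syntax.
template = "Translate the following text to {language}:\n\n{text}"

prompt = template.format(language="French", text="Hello, world!")
print(prompt)
```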

How does the structure of vector databases differ from traditional relational databases?. It is not optimized for high-dimensional spaces. It is based on distances and similarities in a vector space. It uses simple row-based data storage. A vector database stores data in a linear or tabular format.
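A toy illustration of that distance-and-similarity structure: instead of matching rows by exact keys, a vector store ranks entries by how close their embeddings lie to the query vector. This brute-force sketch uses cosine similarity over made-up two-dimensional embeddings:

```python
import math


def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


# A tiny "vector database": document id -> embedding (values invented).
db = {"doc1": [0.9, 0.1], "doc2": [0.1, 0.9], "doc3": [0.7, 0.7]}

query = [1.0, 0.0]
best = max(db, key=lambda k: cosine_sim(query, db[k]))
print(best)  # doc1
```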

Given the following code block: history = StreamlitChatMessageHistory(key="chat_messages"); memory = ConversationBufferMemory(chat_memory=history). Which statement is NOT true about StreamlitChatMessageHistory?. A given StreamlitChatMessageHistory will not be shared across user sessions. A given StreamlitChatMessageHistory will NOT be persisted. StreamlitChatMessageHistory will store messages in Streamlit session state at the specified key. StreamlitChatMessageHistory can be used in any type of LLM application.

In which scenario is soft prompting appropriate compared to other training styles?. When the model requires continued pretraining on unlabeled data. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training. When the model needs to be adapted to perform well in a domain on which it was not originally trained. When there is a significant amount of labeled, task-specific data available.

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?. Increasing the temperature flattens the distribution, allowing for more varied word choices. Increasing the temperature removes the impact of the most likely word. Temperature has no effect on probability distribution; it only changes the speed of decoding. Decreasing the temperature broadens the distribution, making less likely words more probable.
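The flattening effect can be demonstrated with a temperature-scaled softmax over toy logits (pure Python; the logit values are arbitrary). Dividing logits by a higher temperature compresses their differences, so the resulting distribution is flatter and the top token's probability shrinks:

```python
import math


def softmax_with_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.5)   # sharper distribution
high = softmax_with_temperature(logits, 2.0)  # flatter distribution
# Higher temperature -> the most likely word gets a smaller share of probability.
```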

What does the RAG Sequence model do in the context of generating a response?. It retrieves relevant documents only for the initial part of the query and ignores the rest. It retrieves a single relevant document for the entire input query and generates a response based on that alone. It modifies the input query before retrieving relevant documents to ensure a diverse response. For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response.

What does accuracy measure in the context of fine-tuning results for a generative model?. The depth of the neural network layers used in the model. The number of predictions a model makes, regardless of whether they are correct or incorrect. How many predictions the model made correctly out of all the predictions in an evaluation. The proportion of incorrect predictions made by the model during an evaluation.
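That definition is simply correct predictions divided by total predictions, as in this small sketch (labels invented for illustration):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


# 3 of 4 predictions match -> accuracy 0.75.
print(accuracy(["a", "b", "a", "c"], ["a", "b", "b", "c"]))  # 0.75
```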

Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?. Hierarchical relationships; important for structuring database queries. Linear relationships; they simplify the modeling process. Semantic relationships; crucial for understanding context and generating precise language. Temporal relationships; necessary for predicting future linguistic trends.

What is the purpose of Retrieval Augmented Generation (RAG) in text generation?. To retrieve text from an external source and present it without any modifications. To store text in an external database without using it for generation. To generate text based only on the model's internal knowledge without external data. To generate text using extra information obtained from an external data source.
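A minimal sketch of the RAG idea, with a stand-in keyword-overlap retriever in place of a real embedding search (all function names and documents below are invented): retrieved documents are folded into the prompt so the model can generate using external information rather than only its internal knowledge.

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (toy stand-in retriever)."""
    query_words = set(query.lower().split())

    def overlap(doc):
        return len(query_words & set(doc.lower().split()))

    return sorted(documents, key=overlap, reverse=True)[:k]


def build_rag_prompt(query, documents):
    """Augment the generation prompt with retrieved external context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using the context below.\nContext:\n{context}\nQuestion: {query}"


docs = ["Paris is the capital of France.", "The Nile is a river in Africa."]
prompt = build_rag_prompt("What is the capital of France?", docs)
print(prompt)
```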
