mstatt committed
Commit
a26d00d
1 Parent(s): f22dda1

Upload tokenizer

Files changed (3)
  1. README.md +60 -23
  2. tokenizer.json +1 -6
  3. tokenizer_config.json +1 -1
README.md CHANGED
@@ -6,29 +6,66 @@ tags:
 - NLP
 pipeline_tag: summarization
 widget:
- - text:
- ' Moderator: Welcome, everyone, to this exciting panel discussion. Today, we have Elon Musk and Sam Altman, two of the most influential figures in the tech industry. We’re here to discuss the future of artificial intelligence and its impact on society. Elon, Sam, thank you for joining us.
- Elon Musk: Happy to be here.
- Sam Altman: Looking forward to the discussion.
- Moderator: Let’s dive right in. Elon, you’ve been very vocal about your concerns regarding AI. Could you elaborate on why you believe AI poses such a significant risk to humanity?
- Elon Musk: Certainly. AI has the potential to become more intelligent than humans, which could be extremely dangerous if it goes unchecked. The existential threat is real. If we dont implement strict regulations and oversight, we risk creating something that could outsmart us and act against our interests. It’s a ticking time bomb.
- Sam Altman: I respect Elon’s concerns, but I think he’s overestimating the threat. The focus should be on leveraging AI to solve some of humanity’s biggest problems. With proper ethical frameworks and robust safety measures, we can ensure AI benefits everyone. The fear-mongering is unproductive and could hinder technological progress.
- Elon Musk: It’s not fear-mongering, Sam. It’s being cautious. We need to ensure that we have control mechanisms in place. Without these, we’re playing with fire. You can’t possibly believe that AI will always remain benevolent or under our control.
- Sam Altman: Control mechanisms are essential, I agree, but what you’re suggesting sounds like stifling innovation out of fear. We need a balanced approach. Overregulation could slow down advancements that could otherwise save lives and improve quality of life globally. We must foster innovation while ensuring safety, not let fear dictate our actions.
- Elon Musk: Balancing innovation and safety is easier said than done. When you’re dealing with something as unpredictable and powerful as AI, the risks far outweigh the potential benefits if we don’t tread carefully. History has shown us the dangers of underestimating new technologies.
- Sam Altman: And history has also shown us the incredible benefits of technological advancement. If we had been overly cautious, we might not have the medical, communication, or energy technologies we have today. It’s about finding that middle ground where innovation thrives safely. We can’t just halt progress because of hypothetical risks.
- Elon Musk: It’s not hypothetical, Sam. Look at how quickly AI capabilities are advancing. Were already seeing issues with bias, decision-making, and unintended consequences. Imagine this on a larger scale. We can’t afford to be complacent.
- Sam Altman: Bias and unintended consequences are exactly why we need to invest in research and development to address these issues head-on. By building AI responsibly and learning from each iteration, we can mitigate these risks. Shutting down or heavily regulating AI development out of fear isn’t the solution.
- Moderator: Both of you make compelling points. Let’s fast forward a bit. Say, ten years from now, we have stringent regulations in place, as Elon suggests, or a more flexible framework, as Sam proposes. What does the world look like?
- Elon Musk: With stringent regulations, we would have a more controlled and safer AI development environment. This would prevent any catastrophic events and ensure that AI works for us, not against us. We’d be able to avoid many potential disasters that an unchecked AI might cause.
- Sam Altman: On the other hand, with a more flexible framework, wed see rapid advancements in AI applications across various sectors, from healthcare to education, bringing significant improvements to quality of life and solving problems that seem insurmountable today. The world would be a much better place with these innovations.
- Moderator: And what if both of you are wrong?
- Elon Musk: Wrong?
- Sam Altman: How so?
- Moderator: Suppose the future shows that neither stringent regulations nor a flexible framework were the key factors. Instead, what if the major breakthroughs and safety measures came from unexpected areas like quantum computing advancements or new forms of human-computer symbiosis, rendering this entire debate moot?
- Elon Musk: Well, that’s a possibility. If breakthroughs in quantum computing or other technologies overshadow our current AI concerns, it could change the entire landscape. It’s difficult to predict all variables.
- Sam Altman: Agreed. Technology often takes unexpected turns. If future advancements make our current debate irrelevant, it just goes to show how unpredictable and fast-moving the tech world is. The key takeaway would be the importance of adaptability and continuous learning.
- Moderator: Fascinating. It appears that the only certainty in the tech world is uncertainty itself. Thank you both for this engaging discussion.'
+ - text: ' Moderator: Welcome, everyone, to this exciting panel discussion. Today,
+ we have Elon Musk and Sam Altman, two of the most influential figures in the tech
+ industry. We’re here to discuss the future of artificial intelligence and its
+ impact on society. Elon, Sam, thank you for joining us. Elon Musk: Happy to be
+ here. Sam Altman: Looking forward to the discussion. Moderator: Let’s dive right
+ in. Elon, you’ve been very vocal about your concerns regarding AI. Could you elaborate
+ on why you believe AI poses such a significant risk to humanity? Elon Musk: Certainly.
+ AI has the potential to become more intelligent than humans, which could be extremely
+ dangerous if it goes unchecked. The existential threat is real. If we don’t implement
+ strict regulations and oversight, we risk creating something that could outsmart
+ us and act against our interests. It’s a ticking time bomb. Sam Altman: I respect
+ Elon’s concerns, but I think he’s overestimating the threat. The focus should
+ be on leveraging AI to solve some of humanity’s biggest problems. With proper
+ ethical frameworks and robust safety measures, we can ensure AI benefits everyone.
+ The fear-mongering is unproductive and could hinder technological progress. Elon
+ Musk: It’s not fear-mongering, Sam. It’s being cautious. We need to ensure that
+ we have control mechanisms in place. Without these, we’re playing with fire. You
+ can’t possibly believe that AI will always remain benevolent or under our control.
+ Sam Altman: Control mechanisms are essential, I agree, but what you’re suggesting
+ sounds like stifling innovation out of fear. We need a balanced approach. Overregulation
+ could slow down advancements that could otherwise save lives and improve quality
+ of life globally. We must foster innovation while ensuring safety, not let fear
+ dictate our actions. Elon Musk: Balancing innovation and safety is easier said
+ than done. When you’re dealing with something as unpredictable and powerful as
+ AI, the risks far outweigh the potential benefits if we don’t tread carefully.
+ History has shown us the dangers of underestimating new technologies. Sam Altman:
+ And history has also shown us the incredible benefits of technological advancement.
+ If we had been overly cautious, we might not have the medical, communication,
+ or energy technologies we have today. It’s about finding that middle ground where
+ innovation thrives safely. We can’t just halt progress because of hypothetical
+ risks. Elon Musk: It’s not hypothetical, Sam. Look at how quickly AI capabilities
+ are advancing. We’re already seeing issues with bias, decision-making, and unintended
+ consequences. Imagine this on a larger scale. We can’t afford to be complacent.
+ Sam Altman: Bias and unintended consequences are exactly why we need to invest
+ in research and development to address these issues head-on. By building AI responsibly
+ and learning from each iteration, we can mitigate these risks. Shutting down or
+ heavily regulating AI development out of fear isn’t the solution. Moderator: Both
+ of you make compelling points. Let’s fast forward a bit. Say, ten years from now,
+ we have stringent regulations in place, as Elon suggests, or a more flexible framework,
+ as Sam proposes. What does the world look like? Elon Musk: With stringent regulations,
+ we would have a more controlled and safer AI development environment. This would
+ prevent any catastrophic events and ensure that AI works for us, not against us.
+ We’d be able to avoid many potential disasters that an unchecked AI might cause.
+ Sam Altman: On the other hand, with a more flexible framework, we’d see rapid
+ advancements in AI applications across various sectors, from healthcare to education,
+ bringing significant improvements to quality of life and solving problems that
+ seem insurmountable today. The world would be a much better place with these innovations.
+ Moderator: And what if both of you are wrong? Elon Musk: Wrong? Sam Altman: How
+ so? Moderator: Suppose the future shows that neither stringent regulations nor
+ a flexible framework were the key factors. Instead, what if the major breakthroughs
+ and safety measures came from unexpected areas like quantum computing advancements
+ or new forms of human-computer symbiosis, rendering this entire debate moot? Elon
+ Musk: Well, that’s a possibility. If breakthroughs in quantum computing or other
+ technologies overshadow our current AI concerns, it could change the entire landscape.
+ It’s difficult to predict all variables. Sam Altman: Agreed. Technology often
+ takes unexpected turns. If future advancements make our current debate irrelevant,
+ it just goes to show how unpredictable and fast-moving the tech world is. The
+ key takeaway would be the importance of adaptability and continuous learning.
+ Moderator: Fascinating. It appears that the only certainty in the tech world is
+ uncertainty itself. Thank you both for this engaging discussion.'
 example_title: Sample 1
 ---
 # Arc of the Conversation Model
tokenizer.json CHANGED
@@ -1,11 +1,6 @@
 {
 "version": "1.0",
- "truncation": {
- "direction": "Right",
- "max_length": 1024,
- "strategy": "LongestFirst",
- "stride": 0
- },
+ "truncation": null,
 "padding": null,
 "added_tokens": [
 {
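The diff above removes the serialized truncation settings from tokenizer.json (previously: right-truncate to 1024 tokens with the LongestFirst strategy) and replaces them with `null`, so the tokenizer no longer truncates by default. A minimal sketch of the behavioral difference, using a plain-Python stand-in for the tokenizers runtime (`apply_truncation` is an illustrative helper, not the real API; the dicts mirror the JSON above):

```python
# Stand-in for how a serialized truncation config is applied during encoding.
OLD_TRUNCATION = {
    "direction": "Right",
    "max_length": 1024,
    "strategy": "LongestFirst",
    "stride": 0,
}
NEW_TRUNCATION = None  # the committed value: "truncation": null


def apply_truncation(token_ids, truncation):
    """Clip token ids per the config; with None, return them untouched."""
    if truncation is None:
        return token_ids
    if truncation["direction"] == "Right":
        return token_ids[: truncation["max_length"]]
    return token_ids[-truncation["max_length"]:]


ids = list(range(1500))  # pretend encoding of a long document
print(len(apply_truncation(ids, OLD_TRUNCATION)))  # 1024: clipped before the change
print(len(apply_truncation(ids, NEW_TRUNCATION)))  # 1500: untouched after the change
```

In practice this shifts the decision to the caller, who opts in per call (for example `tokenizer(text, truncation=True, max_length=...)`) instead of inheriting a default baked into the serialized file.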
tokenizer_config.json CHANGED
@@ -930,7 +930,7 @@
 "clean_up_tokenization_spaces": true,
 "eos_token": "</s>",
 "extra_ids": 100,
- "max_length": 1024,
+ "max_length": 2048,
 "model_max_length": 512,
 "pad_token": "<pad>",
 "stride": 0,
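Note that this hunk doubles `max_length` to 2048 while `model_max_length` stays at 512. To the best of my understanding of the transformers tokenizer API, `model_max_length` is what the default truncation path consults, so a call like `tokenizer(text, truncation=True)` with no explicit `max_length` would still cap at 512; the larger value only takes effect when requested per call. A hedged sketch of that precedence (`effective_max_length` is an illustrative helper, not library code):

```python
def effective_max_length(call_max_length, model_max_length):
    """Illustrative precedence: an explicit per-call max_length wins;
    otherwise truncation falls back to the tokenizer's model_max_length."""
    return call_max_length if call_max_length is not None else model_max_length


# values from tokenizer_config.json after this commit
MODEL_MAX_LENGTH = 512

print(effective_max_length(None, MODEL_MAX_LENGTH))  # 512: the default cap
print(effective_max_length(2048, MODEL_MAX_LENGTH))  # 2048: only when passed explicitly
```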