just start out small and when it gives you code that works, just iterate over it with more modification requests.

this thread tho
It's still wild to me that you can generate whole userscripts like that
I actually hadn't tried any AI tools yet - I had been avoiding them - until yesterday, when I used ChatGPT to help me write part of my LinkedIn profile, and... it was even easier than I expected it to be (though I'm not sure how much of it I actually used, it did generate some good content).
people absolutely have a need for this but they don't even know it yet. it's like how a lot more people started going online and doing online shopping when the iphone came out - plenty of us knew the power of the internet way before that.
If you go to a tweet thread like https://twitter.com/username/status/1234567890123456 and click the bookmarklet, the contents copied to the clipboard will look like this (a minimal sketch of the bookmarklet pattern follows the example):

https://twitter.com/vangeorgh/status/1759168160052162682
[SPOILER="thread continued"]
https://twitter.com/vangeorgh/status/1759168173264224631
https://twitter.com/vangeorgh/status/1759168181795520605
https://twitter.com/vangeorgh/status/1759168195196248237
https://twitter.com/vangeorgh/status/1759168210039976426
https://twitter.com/vangeorgh/status/1759168216025248071
https://twitter.com/vangeorgh/status/1759168228989751460
https://twitter.com/vangeorgh/status/1759168243086897355
https://twitter.com/vangeorgh/status/1759168256655462721
[/SPOILER]
[SPOILER="larger images"]
[img]https://pbs.twimg.com/media/GGnSHRCXUAAFuib.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSIBTXwAAqxN4.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSIBPXsAAjTey.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSIBTWkAAEoQs.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSIBOX0AAAmCA.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSJRBXsAAPBfw.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSJRAW4AAuqG1.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSJRVXsAACRCJ.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSKKkWsAAcfRJ.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSKKbXMAENx9z.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSKKZXMAMJYJm.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSLS9WoAAow1K.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSLTCXEAARPsw.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSLS-XMAANhxo.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSLTEWMAA8rsK.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSMGdXoAAYO-Q.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSMGaWAAAkK2j.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSMGhWEAAgGrC.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSMGcWoAA80kq.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSM7CXkAAm5Es.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSM7DW4AAC9Sd.jpg[/img]
[img]https://pbs.twimg.com/media/GGnSM7CWUAA3EnS.jpg[/img]
[/SPOILER]
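For context on how the output above gets produced: the bookmarklet is just an immediately-invoked function that scrapes the author's status links and the full-size media URLs from the page, joins them into that BBCode, and copies the result to the clipboard. This is only a sketch of the pattern, not the actual script - the selectors and the image URL handling are assumptions that may need adjusting for Twitter's current markup:

javascript:(function() {
    // collect this author's /status/ links (dedup; the regex skips /photo/ and /analytics sub-pages)
    var username = window.location.pathname.split('/')[1];
    var links = Array.from(document.querySelectorAll("a[href^='/" + username + "/status/']"))
        .map(function(a) { return a.href; })
        .filter(function(href, i, arr) { return /\/status\/\d+$/.test(href) && arr.indexOf(href) === i; });
    // collect full-size image URLs from the thread's media (assumes the usual ?format=... query string)
    var images = Array.from(document.querySelectorAll('img[src*="pbs.twimg.com/media"]'))
        .map(function(img) { return '[img]' + img.src.split('?')[0] + '.jpg[/img]'; });
    var output = [
        window.location.href,
        '[SPOILER="thread continued"]\n' + links.join('\n') + '\n[/SPOILER]',
        '[SPOILER="larger images"]\n' + images.join('\n') + '\n[/SPOILER]'
    ].join('\n');
    navigator.clipboard.writeText(output); // needs a secure context and the page focused
})();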
https://twitter.com/bindureddy/status/1730248977499762740
[SPOILER="thread continued"]
https://twitter.com/bindureddy/status/1730248977499762740
https://twitter.com/bindureddy/status/1730370773242757463
https://twitter.com/bindureddy/status/1730607977294627299
[/SPOILER]
[SPOILER="full text"]
1/3
A day doesn't go by without a powerful open-source AI model drop.
DeepSeek (67B) makes up for the open-source shortcomings - math and coding!!
It beats Claude-2, scoring 65 on a National High School Exam.
DeepSeek is from China and is proof that the Chinese don't need our LLM tech; they can develop their own and are enlightened enough to open-source it!!
We are taking a look this week and will make it available in the Abacus AI platform next.
2/3
Can you elaborate more?
3/3
Thanks, I will dive into the details of the eval today - on first glance the model seems pretty good and is much faster than Llama-2
[/SPOILER]
[SPOILER="larger images"]
[img]https://pbs.twimg.com/media/GAMUJ5TbEAAE7hp.jpg[/img]
[/SPOILER]
// ==UserScript==
// @name Add Live Character Counter to Post Form
// @namespace http://tampermonkey.net/
// @version 0.1
// @description Adds a live character counter next to the "Post Reply" button in the post thread form on websites with ".com/threads/*" in the URL. The counter has a limit of 5000 characters (excluding spaces) and displays a warning message if the limit is exceeded. Works with both HTML and BBCode editor modes.
// @author Your Name
// @match *://*.com/threads/*
// @grant none
// ==/UserScript==
(function() {
    'use strict';

    function addCharacterCounter() {
        var postForm = document.querySelector('.message-editorWrapper');
        if (postForm) {
            // The post form can be in plain-text (BBCode) mode or rich-text (Froala) mode.
            var textarea = postForm.querySelector('textarea.input[name="message"]');
            var richTextEditor = postForm.querySelector('.fr-element.fr-view');
            if (textarea || richTextEditor) {
                var formButtonGroup = postForm.querySelector('.formButtonGroup');
                if (formButtonGroup) {
                    // Build the counter UI: an "N/5000" label plus a hidden warning message.
                    var counterContainer = document.createElement('div');
                    counterContainer.classList.add('character-counter');
                    counterContainer.style.display = 'flex';
                    counterContainer.style.alignItems = 'center';
                    counterContainer.style.marginRight = '10px';
                    var counterLabel = document.createElement('span');
                    counterLabel.style.marginRight = '5px';
                    counterLabel.textContent = '0/5000';
                    var warningMessage = document.createElement('span');
                    warningMessage.style.color = 'red';
                    warningMessage.style.fontWeight = 'normal';
                    warningMessage.textContent = 'Character limit exceeded';
                    warningMessage.style.display = 'none';
                    counterContainer.appendChild(counterLabel);
                    counterContainer.appendChild(warningMessage);
                    formButtonGroup.insertBefore(counterContainer, formButtonGroup.firstChild);

                    // Count non-space characters from whichever editor is currently visible.
                    var updateCounter = function() {
                        var inputText = '';
                        if (textarea && textarea.style.display !== 'none') {
                            inputText = textarea.value;
                        } else if (richTextEditor && richTextEditor.style.display !== 'none') {
                            inputText = richTextEditor.textContent;
                        }
                        var nonSpaceChars = inputText.replace(/\s/g, '').length;
                        var charCount = nonSpaceChars + '/5000';
                        if (nonSpaceChars > 5000) {
                            counterLabel.style.color = 'red';
                            counterLabel.style.fontWeight = 'bold';
                            warningMessage.style.display = 'inline';
                        } else {
                            counterLabel.style.color = '';
                            counterLabel.style.fontWeight = 'normal';
                            warningMessage.style.display = 'none';
                        }
                        counterLabel.textContent = charCount;
                    };

                    var handleInputChange = function() {
                        updateCounter();
                    };

                    // The plain textarea fires input events; the rich-text editor needs a MutationObserver.
                    if (textarea) {
                        textarea.addEventListener('input', handleInputChange);
                    }
                    if (richTextEditor) {
                        var observer = new MutationObserver(handleInputChange);
                        observer.observe(richTextEditor, { childList: true, characterData: true, subtree: true });
                    }
                    updateCounter(); // initial count

                    // Recount when the editor switches between BBCode and rich-text modes.
                    var editorModeObserver = new MutationObserver(function(mutations) {
                        mutations.forEach(function(mutation) {
                            if (mutation.type === 'attributes' && mutation.attributeName === 'class') {
                                updateCounter();
                            }
                        });
                    });
                    var editorContainer = document.querySelector('.fr-box');
                    if (editorContainer) {
                        editorModeObserver.observe(editorContainer, { attributes: true, attributeFilter: ['class'] });
                    }
                }
            }
        }
    }

    window.addEventListener('load', addCharacterCounter);
})();
The same counter works as a bookmarklet too (identical logic, but it runs immediately instead of waiting for the page load event):
javascript:(function() {
function addCharacterCounter() {
var postForm = document.querySelector('.message-editorWrapper');
if (postForm) {
var textarea = postForm.querySelector('textarea.input[name="message"]');
var richTextEditor = postForm.querySelector('.fr-element.fr-view');
if (textarea || richTextEditor) {
var formButtonGroup = postForm.querySelector('.formButtonGroup');
if (formButtonGroup) {
var counterContainer = document.createElement('div');
counterContainer.classList.add('character-counter');
counterContainer.style.display = 'flex';
counterContainer.style.alignItems = 'center';
counterContainer.style.marginRight = '10px';
var counterLabel = document.createElement('span');
counterLabel.style.marginRight = '5px';
counterLabel.textContent = '0/5000';
var warningMessage = document.createElement('span');
warningMessage.style.color = 'red';
warningMessage.style.fontWeight = 'normal';
warningMessage.textContent = 'Character limit exceeded';
warningMessage.style.display = 'none';
counterContainer.appendChild(counterLabel);
counterContainer.appendChild(warningMessage);
formButtonGroup.insertBefore(counterContainer, formButtonGroup.firstChild);
var updateCounter = function() {
var inputText = '';
if (textarea && textarea.style.display !== 'none') {
inputText = textarea.value;
} else if (richTextEditor && richTextEditor.style.display !== 'none') {
inputText = richTextEditor.textContent;
}
var nonSpaceChars = inputText.replace(/\s/g, '').length;
var charCount = nonSpaceChars + '/5000';
if (nonSpaceChars > 5000) {
counterLabel.style.color = 'red';
counterLabel.style.fontWeight = 'bold';
warningMessage.style.display = 'inline';
} else {
counterLabel.style.color = '';
counterLabel.style.fontWeight = 'normal';
warningMessage.style.display = 'none';
}
counterLabel.textContent = charCount;
};
var handleInputChange = function() {
updateCounter();
};
if (textarea) {
textarea.addEventListener('input', handleInputChange);
}
if (richTextEditor) {
var observer = new MutationObserver(handleInputChange);
observer.observe(richTextEditor, { childList: true, characterData: true, subtree: true });
}
updateCounter(); // Call updateCounter initially
var editorModeObserver = new MutationObserver(function(mutations) {
mutations.forEach(function(mutation) {
if (mutation.type === 'attributes' && mutation.attributeName === 'class') {
updateCounter();
}
});
});
var editorContainer = document.querySelector('.fr-box');
if (editorContainer) {
editorModeObserver.observe(editorContainer, { attributes: true, attributeFilter: ['class'] });
}
}
}
}
}
addCharacterCounter();
})();
new and improved script which took longer than the original to create because of twitter's complex code.
what's new:
text from the main user's (OP) tweet is extracted and put inside a spoiler tag (a rough sketch of the idea follows the output example below).
the code isn't perfect - the text it extracts doesn't have properly formatted urls, but i'll try to get that fixed eventually. some of the tweets must be visible on the page when you click the bookmarklet, otherwise it might not grab all the tweet links and images on a really long twitter thread (10+ tweets); you might have to run the bookmarklet twice in that case.
### you can also highlight the code and DRAG & DROP it into your browser's bookmarks toolbar.
code output example:
Code:
https://twitter.com/bindureddy/status/1730248977499762740
[SPOILER="thread continued"]
https://twitter.com/bindureddy/status/1730248977499762740
https://twitter.com/bindureddy/status/1730370773242757463
https://twitter.com/bindureddy/status/1730607977294627299
[/SPOILER]
[SPOILER="full text"]
1/3
A day doesn't go by without a powerful open-source AI model drop.
DeepSeek (67B) makes up for the open-source shortcomings - math and coding!!
It beats Claude-2, scoring 65 on a National High School Exam.
DeepSeek is from China and is proof that the Chinese don't need our LLM tech; they can develop their own and are enlightened enough to open-source it!!
We are taking a look this week and will make it available in the Abacus AI platform next.
2/3
Can you elaborate more?
3/3
Thanks, I will dive into the details of the eval today - on first glance the model seems pretty good and is much faster than Llama-2
[/SPOILER]
[SPOILER="larger images"]
[img]https://pbs.twimg.com/media/GAMUJ5TbEAAE7hp.jpg[/img]
[/SPOILER]
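For anyone wondering how the "full text" spoiler gets filled in: the rough idea is to grab each visible tweet's text node, swap truncated display URLs for the real href (as in the diff below), and wrap the lot in BBCode. This is only a sketch - the data-testid selector and variable names are assumptions, not the actual bookmarklet code:

// sketch only: selectors are assumptions about Twitter's markup
var tweetDivs = Array.from(document.querySelectorAll('div[data-testid="tweetText"]'));
var tweetTexts = tweetDivs.map(function(div) {
    var text = div.innerText;
    // replace each truncated display URL with the full href from its <a> element
    Array.from(div.querySelectorAll('a[href]')).forEach(function(a) {
        text = text.replace(a.innerText, a.href);
    });
    return text;
});
// number the tweets 1/N, 2/N, ... and wrap them in the spoiler tag
var fullTextBlock = '[SPOILER="full text"]\n' +
    tweetTexts.map(function(t, i) { return (i + 1) + '/' + tweetTexts.length + '\n' + t; }).join('\n') +
    '\n[/SPOILER]';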
diff of the changes from the previous version:
- var tweets = Array.from(document.querySelectorAll(`a[href^='/${username}/status/']`));
+ var tweets = Array.from(document.querySelectorAll(`a[href^='/${username}/status/']`));
- if (!tweetUrl.includes('/analytics') && !tweetUrl.includes('/photo/') && !userTweets.includes(tweetUrl)) {
+ if (tweetUrl.match(/\/status\/\d+$/) && !userTweets.includes(tweetUrl)) {
- var tweetText = spans.map(span => span.innerText).join('');
+ var tweetText = div.innerText;
- tweetText = tweetText.replace(/https:\/\/<\/span>/g, 'https://');
+ var urls = Array.from(div.querySelectorAll('a[href]')).map(a => a.href);
+ urls.forEach(url => {
+ var truncatedUrl = url.replace(/…$/, '');
+ tweetText = tweetText.replace(url, truncatedUrl);
+ });
- tweetTexts = [firstTweetText].concat(tweetBodies.filter(tweet => userTweets.includes(tweet.url)).map(tweet => tweet.text));
+ tweetTexts = [firstTweetText].concat(tweetBodies.filter(tweet => userTweets.includes(tweet.url) && tweet.url !== userTweets[0]).map(tweet => tweet.text));
- [window.location.href, `[SPOILER="thread continued"]\n${userTweets.join('\n')}\n[/SPOILER]`, `[SPOILER="full text"]` + tweetText + '[/SPOILER]', `[SPOILER="larger images"]\n${imageUrls.join('\n')}\n[/SPOILER]`].join('\n');
+ [window.location.href, `[SPOILER="thread continued"]\n${userTweets.slice(1).join('\n')}\n[/SPOILER]`, `[SPOILER="full text"]` + tweetText + '[/SPOILER]', `[SPOILER="larger images"]\n${imageUrls.join('\n')}\n[/SPOILER]`].join('\n');
The updated version pulls the full URLs from each tweet's link elements and substitutes them back into the extracted text. It also removes any trailing ellipsis (`…`) from the URLs, and the first tweet is no longer repeated in the [SPOILER="thread continued"] section, as it's already present in the main URL.

Output from the updated bookmarklet:

https://twitter.com/matei_zaharia/status/1772972271721763199
[SPOILER="thread continued"]
https://twitter.com/matei_zaharia/status/1772972274053701799
https://twitter.com/matei_zaharia/status/1772972275899228255
https://twitter.com/matei_zaharia/status/1772974673816346840
[/SPOILER]
[SPOILER="full text"]
1/5
At Databricks, we've built an awesome model training and tuning stack. We now used it to release DBRX, the best open source LLM on standard benchmarks to date, exceeding GPT-3.5 while running 2x faster than Llama-70B. https://databricks.com/blog/introducing-dbrx-new-state-art-open-llm…
2/5
DBRX is a 132B parameter MoE model with 36B active params and fine-grained (4-of-16) sparsity and 32K context. It was trained using the dropless MoE approach pioneered by my student
@Tgale96
in MegaBlocks. Technical details: [U][URL]https://databricks.com/blog/introducing-dbrx-new-state-art-open-llm[/URL][/U]
3/5
You can try DBRX on Databricks model serving and playground today, with an OpenAI-compatible API! And if you want to tune, RLHF or train your own model, we have everything we needed to build this from scratch.
4/5
Lots more details on how this was built and why our latest training stack makes it far more efficient than what we could do even a year ago:
5/5
Meet DBRX, a new sota open llm from @databricks. It's a 132B MoE with 36B active params trained from scratch on 12T tokens. It sets a new bar on all the standard benchmarks, and - as an MoE - inference is blazingly fast. Simply put, it's the model your data has been waiting for.
[/SPOILER]
[SPOILER="larger images"]
[img]https://pbs.twimg.com/media/GJrISbEa4AAsTJz.jpg[/img]
[/SPOILER]
The trailing ellipsis on the Databricks link was still slipping through, so one more iteration added this cleanup over the extracted text before it's copied:

body = body.replace(/https?:\/\/[^ ]+…/g, function(url) {
    return url.replace(/…$/, '');
});

With that in place, the output looks like this:
https://twitter.com/matei_zaharia/status/1772972271721763199
[SPOILER="thread continued"]
https://twitter.com/matei_zaharia/status/1772972274053701799
https://twitter.com/matei_zaharia/status/1772972275899228255
https://twitter.com/matei_zaharia/status/1772974673816346840
[/SPOILER]
[SPOILER="full text"]
1/5
At Databricks, we've built an awesome model training and tuning stack. We now used it to release DBRX, the best open source LLM on standard benchmarks to date, exceeding GPT-3.5 while running 2x faster than Llama-70B. https://databricks.com/blog/introducing-dbrx-new-state-art-open-llm
2/5
DBRX is a 132B parameter MoE model with 36B active params and fine-grained (4-of-16) sparsity and 32K context. It was trained using the dropless MoE approach pioneered by my student
@Tgale96 in MegaBlocks. Technical details: [U][URL]https://databricks.com/blog/introducing-dbrx-new-state-art-open-llm[/URL][/U]
3/5
You can try DBRX on Databricks model serving and playground today, with an OpenAI-compatible API! And if you want to tune, RLHF or train your own model, we have everything we needed to build this from scratch.
4/5
Lots more details on how this was built and why our latest training stack makes it far more efficient than what we could do even a year ago:
5/5
Meet DBRX, a new sota open llm from @databricks. It's a 132B MoE with 36B active params trained from scratch on 12T tokens. It sets a new bar on all the standard benchmarks, and - as an MoE - inference is blazingly fast. Simply put, it's the model your data has been waiting for.
[/SPOILER]
[SPOILER="larger images"]
[img]https://pbs.twimg.com/media/GJrISbEa4AAsTJz.jpg[/img]
[/SPOILER]