tasty transformer papers | october 2024
[2/4]

Differential Transformer
what: a small modification to the self-attention mechanism.
- focuses on the most important information, ignoring unnecessary details.
- does this by subtracting one attention map from another to remove "noise" (see sketch below).
link: https://arxiv.org/abs/2410.05258
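
a minimal single-head sketch of the subtraction trick (my toy version, not the paper's code — the projection names and the fixed lam are assumptions; the real model learns lambda and runs multi-head):

```python
import torch
import torch.nn.functional as F

def diff_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    # x: (seq_len, d_model); W*: (d_model, d_head) projections (hypothetical names)
    d = Wq1.shape[1]
    a1 = F.softmax((x @ Wq1) @ (x @ Wk1).T / d**0.5, dim=-1)  # "signal" attention map
    a2 = F.softmax((x @ Wq2) @ (x @ Wk2).T / d**0.5, dim=-1)  # "noise" attention map
    return (a1 - lam * a2) @ (x @ Wv)  # the differential map weights the values
```
tokens that both maps score highly (shared "noise" context) cancel out; tokens only the first map cares about survive.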

Pixtral-12B
what: good multimodal model with a simple architecture.
- Vision Encoder with ROPE-2D: Handles any image resolution/aspect ratio natively.
- Break Tokens: Separates image rows for flexible aspect ratios.
- Sequence Packing: Batch-processes images in one sequence with block-diagonal masks, so no info "leaks" between images (mask sketch below).
link: https://arxiv.org/abs/2410.07073
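
rough sketch of the block-diagonal mask behind sequence packing (function name and patch counts are mine, not Pixtral's actual code) — every patch token attends only inside its own image, even though several images share one packed sequence:

```python
import torch

def block_diagonal_mask(image_lengths):
    # True = attention allowed; tokens attend only within their own image
    total = sum(image_lengths)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for n in image_lengths:
        mask[start:start + n, start:start + n] = True
        start += n
    return mask

# e.g. three images with 6, 4 and 9 patch tokens packed into one sequence
mask = block_diagonal_mask([6, 4, 9])
```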

Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens
what: MaskGIT with continuous tokens.
- trains the VAE with the usual quantization loss but skips quantization in the decoder, so tokens stay continuous (like Stable Diffusion's VAE).
- a BERT-like model generates the tokens in random order (toy sketch below).
- ablations show the BERT-like setup beats the GPT-like one for images (tbh, the improvements are small).
link: https://arxiv.org/abs/2410.13863
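
toy sketch of the BERT-like random-order decoding with continuous tokens (`model(tokens, known)` returning one vector per position is an assumed interface, not the paper's API):

```python
import torch

@torch.no_grad()
def random_order_generate(model, seq_len, dim, steps=8):
    tokens = torch.zeros(seq_len, dim)                   # all positions start "masked"
    known = torch.zeros(seq_len, dtype=torch.bool)       # which positions are already generated
    for chunk in torch.randperm(seq_len).chunk(steps):   # reveal positions in random chunks
        pred = model(tokens, known)                      # BERT-like: sees the whole sequence at once
        tokens[chunk] = pred[chunk]                      # keep predictions for this chunk only
        known[chunk] = True
    return tokens
```
a GPT-like baseline would fix a left-to-right order with a causal mask instead; per the ablation above, random order with bidirectional attention does a bit better on images.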

UniMTS: Unified Pre-training for Motion Time Series
what: one model to handle different device positions, orientations, and activity types.
- uses a graph convolution encoder to work with all devices.
- contrastive learning with text from LLMs to “get” motion context.
- rotation-invariance: doesn't care about device angle (augmentation sketch below).
link: https://arxiv.org/abs/2410.19818
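
the usual trick behind orientation invariance is to randomly rotate the 3-axis signals during pre-training; a minimal sketch (the exact augmentation in UniMTS may differ):

```python
import torch

def random_rotation_matrix():
    # random orthogonal 3x3 via QR; flip a column if needed so det = +1 (a proper rotation)
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def rotate_imu(signal):
    # signal: (time, 3) accelerometer/gyro series; simulate an arbitrary device orientation
    return signal @ random_rotation_matrix().T
```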

my thoughts

I'm really impressed with the Differential Transformer metrics. They made such a simple and clear modification. Basically, they let the neural network find not only the most similar tokens but also the irrelevant ones. Then they subtract one from the other to get exactly what's needed.

This approach could really boost brain signal processing. After all, brain recordings contain lots of noise and irrelevant signal, and filtering it out would be super helpful. So it looks promising.

Mistral has really nailed how to build and explain models. Clear, brief, super understandable. They removed everything unnecessary, kept just what's needed, and got better results. The simpler, the better!


