FALSE ALARM, CULPRIT FOUND: I had a MalwareBytes browser extension that slowed the emote picker down. The extension has been purged and now both the popup and the inline picker are working well.
The original post, before I did testing:
I try to use an emote in a Hexbear comment. The picture takes forever to even load in the picker, so it takes a while to see whether I've even selected the right emote. For most of the emotes in the list I can't even see the pictures unless I step away for a minute and come back after they've all loaded. I hit submit comment. It takes forever to post. I can hear the fans whirring. If I'm streaming vid on blorptube while posting something from the emote picker, sometimes the video will just pause and stop loading until whatever process the emotes have kicked off completes. I don't know why some emotes are visible while others aren't; it doesn't appear to be related to the order they show up in the list.
I think "Well, I'm on the ESR of Firefox, maybe it's just old," but then I boot into Linux with a cutting-edge, up-to-date Firefox and it still behaves largely the same.
I have wondered if it's an internet speed thing, but it seems to behave this way whether I'm on ethernet or 802.11n wireless, and whether or not my VPN is on.
It also takes quite some time for emotes to load in the emote picker on Blorp, come to think of it.
I’ve got 16GB of RAM, that should be plenty. Yes I know the web has gotten more intensive since this computer was made 13 years ago but come on, compared to hell sites like twitter, facebook, and new.reddit, lemmy seems downright lightweight. NoScript and other blockers aren’t blocking a ton of bloat here.
I'm not criticizing the unsung heroic devs of Hexbear. I'm no good at web dev, hated it when I had to do it for work, and couldn't fix or optimize this either. But I would like to understand what the root cause of this problem is and whether there's anything I can do about it on my end.
(If the answer is “upgrade your hardware or live with it” then I’m living with it, because my precious little computer is my buddy and I’m not abandoning it)


@[email protected], @[email protected] You can find the repo that contains all the emoji here: https://github.com/Hexbear-Code-Op/hexbear-emotes and I think there is a full size and a reduced size folder contained within. You might want to look at imagemagick for bulk image conversion and resizing.
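For the bulk conversion, a minimal sketch of driving ImageMagick from Python. The `full/` and `reduced/` folder names and the 64px target are assumptions, not the repo's actual conventions, and ImageMagick 7 users may need the `magick` binary instead of `convert`:

```python
# Hedged sketch: shell out to ImageMagick to shrink every emote image.
# Folder names and size target are assumptions; check the repo layout.
import subprocess
from pathlib import Path

def magick_cmd(src, dst, max_px=64):
    # The ">" geometry suffix tells ImageMagick to shrink only images
    # larger than max_px, leaving already-small ones untouched.
    return ["convert", str(src), "-resize", f"{max_px}x{max_px}>", str(dst)]

def shrink_all(src_dir="full", dst_dir="reduced"):
    Path(dst_dir).mkdir(exist_ok=True)
    for src in sorted(Path(src_dir).iterdir()):
        if src.suffix.lower() in {".png", ".gif", ".jpg", ".webp"}:
            subprocess.run(magick_cmd(src, Path(dst_dir) / src.name), check=True)
```

Animated GIFs may want `-coalesce` before the resize so each frame is scaled correctly.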
Though, it might be more that @[email protected] needs to literally log in and manually re-upload smaller versions of the emoji through this interface:
(screenshot of the emoji admin interface, from my instance)
And that interface is not great. Looking at it right now, I don’t even think there is an option to “reupload” an emoji; you might have to delete it and re-add it, which I think is what the actual time crunch would be.
Now, all that said, if you wanted to go fucking ham on this issue, you could use the CustomEmoji endpoint: https://lemmy.readme.io/reference/put_custom-emoji to automate the process of fetching, then deleting, then recreating the entire emoji list. You would need to have the emoji hosted somewhere on the internet for it to work, but that could be the GitHub repo. I assume that when you provide the image URL, Lemmy automatically pulls the image from the remote location to the server.
So, in theory, there is a tool waiting to be built that would allow you to not only point your instance at an emoji GitHub repo and import it, but periodically sync the emoji from GitHub itself. Obviously, that’s outside the scope of the issue here. What is needed is just a script that can replace the existing emoji with smaller versions, retaining their short code and other data.
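If someone does build that script, the core of it might look like this. This is a sketch only: the endpoint paths and field names (a `CreateCustomEmoji` body with `category`/`shortcode`/`image_url`/`alt_text`/`keywords`, deletion via `POST /custom_emoji/delete`, auth passed in the body) follow my reading of the v3 API docs and should be verified against lemmy.readme.io before pointing it at a live instance; `small_url_for` is a hypothetical helper:

```python
# Sketch of the fetch -> delete -> recreate loop for every custom emoji.
# Endpoint paths and field names are assumptions from the v3 API docs.
import json
from urllib import request

API = "https://hexbear.net/api/v3"

def recreate_payload(view: dict, new_image_url: str) -> dict:
    """Rebuild a CreateCustomEmoji body from a CustomEmojiView, keeping the
    shortcode, category, alt text, and keywords, but swapping the image URL
    (e.g. pointing at the smaller file hosted in the GitHub repo)."""
    ce = view["custom_emoji"]
    return {
        "category": ce["category"],
        "shortcode": ce["shortcode"],
        "image_url": new_image_url,
        "alt_text": ce.get("alt_text", ""),
        "keywords": [k["keyword"] for k in view.get("keywords", [])],
    }

def api_post(path: str, body: dict, jwt: str) -> dict:
    """POST a JSON body with the instance auth token included."""
    req = request.Request(
        f"{API}{path}",
        data=json.dumps({**body, "auth": jwt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Intended loop (not executed here; small_url_for is hypothetical):
#   site = json.load(request.urlopen(f"{API}/site"))
#   for view in site["custom_emojis"]:
#       api_post("/custom_emoji/delete", {"id": view["custom_emoji"]["id"]}, jwt)
#       api_post("/custom_emoji", recreate_payload(view, small_url_for(view)), jwt)
```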
Edit:
If you wanted to get a JSON object of our big, beautiful emoji list, it’s here: https://hexbear.net/api/v3/site look under “custom_emoji”. (2749 objects! Woof!)
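A quick way to poke at that JSON from Python. The response shape assumed here (a list of CustomEmojiView objects under `custom_emojis`) is my reading of the v3 docs:

```python
# Count the emoji by pulling the public site JSON; no auth needed.
import json
from urllib.request import urlopen

def emoji_shortcodes(site: dict) -> list:
    """Extract every shortcode from a GetSiteResponse-shaped dict."""
    return [v["custom_emoji"]["shortcode"] for v in site.get("custom_emojis", [])]

def fetch_count(url="https://hexbear.net/api/v3/site") -> int:
    """Fetch the site JSON and count the custom emoji."""
    with urlopen(url) as resp:
        return len(emoji_shortcodes(json.load(resp)))
```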
I will likely have a chance to look at this in more detail on Sunday, thank you for getting us started!
Question: Do we have a good way to test what is even causing the bottleneck? Do we know it's emoji size, or is it the sheer # of emojis? Asking because, for example, it takes longer to cp 1GB of very small files than it does to cp a single 1GB file, because the per-file open/close operations take up the bulk of the time.
Loading is slowed down by each emoji being its own HTTP call.
In that case it sounds like # of emojis rather than size of emojis is at least one bottleneck.
Wonder if there's a way to do a profiler test of the performance. There must be, but I'm pulling quarter-understood concepts out from beyond the veil of memory.
Yeah, there are 2700+ emoji. The emoji list is sectioned; if you could gate what is loaded by its section, it could speed things up. Load the emoji in batches by section, basically. It's something like 2 GB of emoji at full resolution. There are also a lot of GIFs that could be converted to APNG (animated PNG) to reduce the size.
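The batching idea could be sketched like this, assuming the data shape matches the `/api/v3/site` response (group by category, then hand the UI one chunk at a time so only visible emoji trigger image requests):

```python
# Sketch: group emoji by section and yield them in chunks, so a picker
# tab only fetches images for the batch actually being rendered.
from collections import defaultdict
from itertools import islice

def by_section(views):
    """Group CustomEmojiView-shaped dicts by their category string."""
    sections = defaultdict(list)
    for v in views:
        sections[v["custom_emoji"]["category"]].append(v)
    return dict(sections)

def batches(items, size=50):
    """Yield successive size-limited chunks of a section's emoji."""
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk
```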
I wonder if HTTP/2 would help.
We already use it
Oh, I misread: it appears Firefox reports HTTP/1.1 for cached responses even if the original request was HTTP/2.