mirror of
https://github.com/debauchee/barrier.git
synced 2026-05-15 14:16:02 -06:00
[GH-ISSUE #147] High memory usage - memory leak #120
Originally created by @maxbla on GitHub (Oct 8, 2018).
Original GitHub issue: https://github.com/debauchee/barrier/issues/147
Operating Systems
Server: Arch
Client: Arch
Barrier Version
2.1.0-Release000000
Steps to reproduce bug
Other info
Is this a known issue with release 2.1.0? If it isn't, I'll dig in to the code.
For now I'll upgrade barrier to 2.1.1 (that's an AUR version number, but barrier is still version 2.1) and monitor. This pmap output seems to suggest that a lot of memory is malloc'd but never free'd, since anon mappings come from malloc and mmap.
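As a rough cross-check of pmap's anon figures, the anonymous portion of a process's resident memory can be totalled straight from /proc. This is only a sketch; looking up the pid with pidof assumes a single running barriers process:

```shell
# Sum the resident anonymous memory (kB) of a running barriers process.
# /proc/<pid>/smaps has a per-mapping "Anonymous:" field; this is where
# malloc/mmap-backed allocations show up, so growth here over time is
# consistent with a heap leak.
pid=$(pidof barriers)   # assumption: exactly one barriers process
awk '/^Anonymous:/ {sum += $2} END {print sum " kB anonymous"}' "/proc/$pid/smaps"
```

Running this periodically and watching the total climb would support the malloc-leak reading of the pmap output.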
pmap output:
@maxbla commented on GitHub (Oct 9, 2018):
I made a debug build by changing -D CMAKE_BUILD_TYPE:STRING=Release to -D CMAKE_BUILD_TYPE:STRING=Debug in the PKGBUILD. I also added set (CMAKE_BUILD_TYPE Debug) in src/barrier-2.1.1/CMakeLists.txt, but I don't think that had any effect. I'm having trouble debugging this program. Valgrind gives an error:
valgrind: m_debuginfo/debuginfo.c:453 (discard_or_archive_DebugInfo): Assertion 'is_DebugInfo_active(di)' failed.
Starting from main(), my debugger gets stuck at app.exec(). App is a QBarrierApplication, QBarrierApplication extends QApplication (from Qt), and exec() starts a blocking event loop that I can't step into. What should I do?
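For reference, the debug build can also be configured directly with CMake outside the PKGBUILD, and for a long-running daemon that never exits cleanly, Valgrind's massif tool is usually a better fit than the default memcheck, since it snapshots heap growth while the process runs rather than reporting leaks only at exit. A sketch; the build paths and the barriers command-line flags are assumptions to adjust for your checkout:

```shell
# Configure and build with debug symbols (out-of-tree build).
cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build -j"$(nproc)"

# memcheck only reports leaks when the process exits, which never happens
# cleanly for a daemon; massif instead records periodic heap snapshots.
valgrind --tool=massif ./build/bin/barriers -f --no-tray --debug DEBUG

# Inspect the snapshots afterwards (massif writes massif.out.<pid>):
ms_print massif.out.*
```

A steadily rising massif profile points at the allocation call stacks responsible, which sidesteps the problem of not being able to step through Qt's event loop.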
@p12tic commented on GitHub (Oct 13, 2018):
Does this happen on the client, on the server, or on both? I myself haven't ever noticed high memory usage on a barrier server running on a Mac, but this could be a platform-specific issue.
@datgame commented on GitHub (Feb 11, 2019):
I had the same problem on Windows 10. I left the same release version running over the weekend; the computer wasn't actively in use.
In 3 days it was using 16 GB of RAM (out of my 32 GB).
You can see barriers.exe in procexp.exe constantly ticking up while the PC is idling, at about 100 kbyte/second.
edit: got it on Linux now too, after 30 days of uptime. I had to kill both barrier and barrierc, as they were using 2.5 GB of RAM and 100% CPU.
Sadly it's hard to debug, as the PC had run out of memory and wasn't responsive.
edit2: the 2 PCs above are client/server, so both suffer from leaking.
@jtara commented on GitHub (Feb 22, 2019):
MacOS as well. It's the server, not the client.
May or may not be related, but I notice the client gets really slow, and pops up the "pasting" window for longer and longer lengths of time, until you stop and restart the server.
On my iMac Pro server it eats 64 GB in a day. It doesn't seem to ever eat more than total physical memory (I have 64 GB). macOS does not crash; of course it swaps. Something seems to stop it at the physical memory limit, so it doesn't eat further virtual memory.
@jdorner4 commented on GitHub (Jun 4, 2019):
I'm having this same issue with Ubuntu 18.04.2 and Barrier 2.2.0-snapshot-00000000
Over a couple of hours the memory usage of barriers creeps up to 300 MB; by the end of the day it will be over 1 GB if I don't stop/quit/kill Barrier.
I have noticed that when the memory use gets high the pointer on the client screen is real jittery - quitting and restarting Barrier fixes the problem.
I usually have to stop/restart Barrier a couple times a day due to this issue.
Please let me know if there are any log files or anything I can do to help you debug/solve this issue.
@noisyshape commented on GitHub (Jun 4, 2019):
There's a fix in Synergy for a TLS memory leak that isn't in Barrier yet. Can anyone disable SSL to see if the problem persists?
@jdorner4 commented on GitHub (Jun 5, 2019):
I disabled TLS on both the server and client. This stopped the memory leak in the barriers process (or at least greatly reduced it to an acceptable level). The barrier process' memory use is still growing, but it too is greatly reduced, to an acceptable level.
@tallero commented on GitHub (Aug 6, 2019):
On my computer, after some time being connected, the cursor becomes completely unresponsive on the client side.
I have to kill the process from another X session or from a TTY. In general, when this happens, barrier uses 20% of the CPU and a very large amount of RAM.
@nikola3244 commented on GitHub (Jan 3, 2020):
This issue is still present on 2.3.2-RELEASE-00000000 (January 1, 2020)
After 1 day on the server, the RAM usage went over 9 GB; then the system started swapping, so I had to kill barriers via a TTY. I did not check the client RAM usage before I killed it as well.
I thought that this might have something to do with sudden loss of connection and/or having the laptop (server) being put to sleep, but the RAM usage stays the same no matter what I try.
Client and server are both laptops using Arch with KDE.
@datgame commented on GitHub (Jan 3, 2020):
my workaround was to start using remote desktop instead :-)
the crazy ram usage and the problems with international keys and stuck mouse and ctrl/shift/win keys got too much.
@zocker-160 commented on GitHub (Feb 28, 2020):
I can confirm this issue, and I think it is worst when one of the clients either goes to sleep or the session is locked.
My RAM usage goes over 21 GB in just a few hours.

EDIT: version 2.3.2-snapshot-0000....
@MathyV commented on GitHub (Mar 4, 2020):
I can confirm this issue: barrier just grows until the complete system crashes, which is a hard thing to do on my machine. It was also the first time in my life I saw a process consume 128G of memory :-) The memory leak seems to start whenever the opposing side (in my case a Windows 10 machine) disconnects. If I have some time I'll try to help debug it.
@galkinvv commented on GitHub (May 6, 2020):
It looks to be a duplicate of #470.
I hope it was fixed in master with the merge of #557.
@mosteo commented on GitHub (Sep 30, 2020):
Just another data point: I'm seeing an apparent slow creep-up of memory use with barrier 2.3.3-release-bbd1accb on Ubuntu 18.04. Swap looks anomalously high.
@galkinvv commented on GitHub (Sep 30, 2020):
@mosteo Thanks for your report. While the issue you described is similar, it seems to be unrelated.
The issue described here is about the background barriers/barrierc applications leaking, whereas you mention barrier, the GUI app for viewing status/configuration. It is much less optimized (but of course shouldn't leak either).
So if the barrier app is leaking (its memory usage is increasing), feel free to report a separate issue.
@mosteo commented on GitHub (Sep 30, 2020):
I see, sorry for the confusion.
@Thomas131 commented on GitHub (Feb 9, 2021):
Edit: Sorry, my leak was related to https://github.com/debauchee/barrier/issues/470#issuecomment-567496748 and seems to have been fixed by #557. I will try to get a newer version, which should fix the problem. Sorry for not doing enough research before commenting.
Hi!
I also just experienced an OOM kill of the barriers server on LM20.
Hours prior, I had to restart the server and client since the IPs changed (probably restarting the client should have been enough, but I mistyped the new IP). Then I used the W10 client using the HID devices of the LM20 machine. After 2.5-3 hours, I shut down the W10 client (21:57:xx) and undocked the LM20 laptop (server) from the docking station (21:59:03). Some minutes later, my laptop froze (before 22:08:00), which ultimately led to an automated OOM kill (22:18:17).
Maybe this helps somebody ... I think my first action will be to tell my OOM killer to kill earlier ...
@mirh commented on GitHub (Feb 10, 2021):
Of course this is still a thing if no new release has happened in the last half a year.
You should check git before re-reporting.
@Thomas131 commented on GitHub (Feb 10, 2021):
I wasn't expecting it to be miraculously closed; I was mostly trying to provide debug logs.
It appears that mine was fixed with #557. I will try to get a newer version. Sorry for the inconvenience!