I have a large megabrain of about 100k thoughts. I've noticed significant lags and freezes on thought creation and activation for several years. For the past two years I've sought a solution to these lags, but I have been unable to get enough support attention on it, as I'm unwilling to send the entire dataset to the company. The lags are often associated with high disk activity, but not always.
Ideally, there should be a diagnostic utility that allows users to identify corruptions within their brain dataset without having to compromise the security of that information. As it stands, I'm not certain whether the freezes are due to a corruption within the dataset or a systemic gap in the performance code. I suspect that in my case it is a combination of both.

There is possibly a corruption in the dataset arising from the same problem Zenrain experienced in 2011 (I am unable to make uncorrupted brainzips anymore and so never use them), plus there appear to be significant delays in TheBrain software accessing complex mapping sets. Having said that, when it does occasionally perform as it should, the response is instantaneous, so the optimization code is there; it is just rarely engaged, for some non-systematic reason.

You might want to make some brainzips and check for any corruptions, misaligned thoughts, etc. in the unzipped version (I've sketched a quick archive-level first pass in the P.S. below). That would provide a clue as to whether there is a structural problem in the dataset that might be related to the delays. Since I was never able to reach a solution, I can't say whether the issues are related. The problem also comes and goes depending on the version.

I have raised this issue several times before (http://forums.thebrain.com/post/show_single_post?pid=1271214372&postcount=8), and as of 8006 the freezes and lags occur with just about every thought activation or creation. The software also locks up and crashes at least once per day.

I had wondered whether the lag was a hardware burden, but upgrading to a quad-core laptop with discrete graphics and 16GB RAM didn't make a difference. The disk is an SSHD with an embedded cache and perpetual defrag on the platters, so that is unlikely to be the issue.

In the past, TheBrain team has made every effort to ensure that performance and scalability were maintained alongside features and capability. Apparently, the only way to really determine the integrity of TheBrain files is to send your entire dataset to the developers so they can go through it. Whilst this is a genuine offer, and typical of the team's excellent customer support over countless years, it is not a workable approach.

It is also possible that the current poor performance is a function of the expansion of the ecosystem as the mobile versions come online. In that case, optimization or additional user self-repair tools may be pushed back a year or two until the new system is in place, which also makes sense (there's no point optimizing in mid-development). Perhaps the Rebuild Database or Overhaul Database functions provide a self-repair or optimization capability, but I haven't found documentation describing precisely what these procedures do.

Despite the lags, the software is still very usable, and as long as I avoid brainzips there is no data corruption. This may be the best trade-off until a solution is developed.

Jim
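P.S. For anyone who wants a first-pass check of their own exports before involving support: the sketch below assumes only that a brainzip is a standard ZIP container, and it verifies just the archive layer (entry checksums and obviously empty files), not TheBrain's internal data structures. The file name is a placeholder, not a real export.

```python
import sys
import zipfile

def check_brainzip(path):
    """Report container-level problems in a brainzip export."""
    if not zipfile.is_zipfile(path):
        print(f"{path}: not a readable ZIP container (truncated or corrupt?)")
        return False
    with zipfile.ZipFile(path) as bz:
        bad = bz.testzip()  # first entry whose CRC fails, or None if all pass
        if bad is not None:
            print(f"{path}: CRC mismatch at entry '{bad}'")
            return False
        # Zero-byte entries aren't necessarily errors, but they're worth a look.
        empties = [info.filename for info in bz.infolist()
                   if info.file_size == 0 and not info.filename.endswith("/")]
        for name in empties:
            print(f"{path}: zero-byte entry (may be normal): {name}")
        print(f"{path}: archive layer OK ({len(bz.infolist())} entries)")
        return True

if __name__ == "__main__":
    # "MyBrain.brz" is a placeholder name; pass your own export on the command line.
    check_brainzip(sys.argv[1] if len(sys.argv) > 1 else "MyBrain.brz")
```

If the CRC check fails, the export itself is damaged; if it passes but the brain still misbehaves, the problem is more likely inside the dataset or the application code.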
TB8022 32bit
Java 32bit Version 8 Update 141
Firefox, Office 2013 Pro Plus 32bit
64bit Win10Pro
64bit Primary Laptop, 8GB RAM, Intel Core i7
64bit Secondary Laptop, 64GB RAM, Intel Xeon E3
Brain user since zygote