" -1]
:global currentIP [:pick $result $startLoc $endLoc]
:log info "DNSoMatic: IP actual $currentIP"
# Touching the string passed to fetch command on "src-path" option
:local str "/nic/update?hostname=$matichost&myip=$currentIP&wildcard=NOCHG&mx=NOCHG&backmx=NOCHG"
:if ($currentIP != $previousIP) do={
:log info "DNSoMatic: Update need"
:set previousIP $currentIP
:log info "DNSoMatic: Sending update $currentIP"
:log info [/tool fetch user=$maticuser password=$maticpass mode=http address="updates.dnsomatic.com" src-path=$str dst-path=$matichostp]
:log info "DNSoMatic: Host $matichost updated on DNSoMatic with IP $currentIP"
} else={
:log info "DNSoMatic: Previous IP $previousIP and current $currentIP equal, no update need"
}
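To run the script above periodically, it can be saved under /system script and fired from the scheduler. A minimal sketch, assuming the script is saved as dnsomatic-update and that a 5-minute polling interval suits your connection; both the names and the interval here are assumptions, not part of the original post:

# Hypothetical scheduler entry; script name and interval are assumptions.
/system scheduler add name=dnsomatic-check interval=5m on-event=dnsomatic-update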
This is logical but maybe not precise. Can someone confirm this? Is this true at least for Linux?... It turns out the load on the router's CPU can be sensed by how quickly the ping is returned....
The increases in the main router's ping time caused by virtualization are extremely minor. They look dramatic in the chart, but an increase of 0.5 ms is really small. And for some operations that introduces quite an overhead, but that should be expected. On the other hand, what good is hardware that runs at only 10-20% load? If adding a guest brings it up to 40-50%, the cost increases by at most 10%, if that much at all.
That's why I have done a week's worth of testing... None of the noise or steady increases appears in the tests until I activate the MetaROUTER. I have 24 hours of flat-line charts at 0.1 ms full scale. Just a black, almost flat line.... This eliminates all other sources of unwanted interference. I am connected with everything gigabit, with good 3-foot cables. Oh wait. Those 0.05 ms increases may be due to a process on the host that sent the ping requests, or due to network activity along the way.
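For anyone wanting to reproduce this kind of baseline from another RouterOS box, a simple fixed-rate ping is enough to make a 0.1 ms bump visible over time. A sketch, assuming 192.168.88.1 is the router under test; the address, count, and interval are assumptions:

# Hypothetical baseline test: ping the router under test at 10 packets/s
# and watch the round-trip times for bumps.
/ping 192.168.88.1 count=1000 interval=0.1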
Hehehe... The switch chip's power supply is already capped. I stuffed EVEN MORE capacitance on the router last night. I'm SURE it's not the power supply. I have 14400 uF on the 3 V supply and 12200 uF on the 1.2 V supply. There is NO WAY there is a power issue now. I think you could start a fusion reactor with all that reserve current available. Not to mention the supplies must have ZERO noise.

Interesting thread, Xymox. Have you tried using different ports? It seems ports 2-5 are different from port 1. On top of that, this post describes packet loss on ports 3-5: viewtopic.php?f=3&t=40798&p=204127&hilit=rb450g+rb750g#p204127 so there could be a difference between 2 and 3-5 as well?
There's also the switch-all-ports=yes/no setting that you might want to test with. For example, I would set switch-all-ports=no and connect the VM to port 1 only. Check for stability, then attach it to port 5 only and test for stability again.
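One way to run that isolation test on boards of this class is to pull a port out of the switch group entirely, so traffic on it is handled by the CPU rather than the switch chip. A hedged sketch using the master-port setting from RouterOS v5/v6; the interface name is an assumption, and the exact menu for switch-all-ports differs between boards and versions:

# Hypothetical test: detach ether5 from the switch group so it no longer
# shares the switch chip with the other ports.
/interface ethernet set ether5 master-port=none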
Maybe the switch chip needs a cap too
Janisk: I just hope something helps to find the problem. Actually, I love this kind of thing. It's like a puzzle. Anyway, thanks for your findings.
/interface virtual-ethernet add name=mr-WAN
/interface virtual-ethernet add name=mr-LAN
/interface bridge port add bridge=WAN-bridge interface=mr-WAN
/interface bridge port add bridge=MetaBridge interface=mr-LAN
/metarouter interface set 0 static-interface=mr-WAN
/metarouter interface set 1 static-interface=mr-LAN
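For completeness, the WAN-bridge and MetaBridge referenced above must already exist before the virtual-ethernet interfaces can be added as ports. A minimal sketch of the prerequisite, assuming no other bridge settings are needed:

# Hypothetical prerequisite: create the bridges the MetaROUTER ports attach to.
/interface bridge add name=WAN-bridge
/interface bridge add name=MetaBridge
# The interface-to-bridge mapping can then be verified with:
/metarouter interface print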
Routing table flush, maybe?

Is there something in the router that runs every 15 minutes? I think I might see a pattern to the latency bumps. The spikes always occur during a latency bump.
I think you mentioned that you are running with switch-all-ports=no. Have you tried switch-all-ports=yes? This will make eth1 part of the switch group (as are eth2-5), I believe.
It appears that if you reboot the router and then connect ONLY ether2 (LAN) and run MultiPing, there are no latency bumps or hangs UNTIL you connect ether1.
Is there something in the router that runs every 15 minutes? I think I might see a pattern to the latency bumps.

Yep, the NTP client poll interval during normal operation is 900 s = 15 minutes =)
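If you want to check whether the NTP client is the 15-minute culprit on your own box, its state can be printed directly; a quick sketch (field names vary slightly across RouterOS versions):

# Show the NTP client configuration and status; once synchronized it polls
# its servers roughly every 900 seconds, matching the observed pattern.
/system ntp client print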
Timekeeping in Linux has changed a great deal over its history. Recently, the direction of kernel development has been toward better behavior in a virtual machine. However, along the way, a number of kernels have had specific bugs that are strongly exposed when run in a virtual machine. Some kernels have very high interrupt rates, resulting in poor timekeeping performance and imposing excessive host load even when the virtual machine is idle.