
Based on this wiki article, https://wiki.archlinux.org/index.php/Advanced_traffic_control, it appears that I can configure the Linux traffic controller, which seems to be a subset of the Linux Network Emulator, to change its queuing disciplines. So far I have gathered that I can change various aspects of how the virtual network is emulated, such as delay, packet loss/corruption, packet reordering, and bandwidth capacity. This is all great, but I am wondering whether I can specifically alter the Linux traffic controller's qdisc. By that I mean: it currently defaults to FIFO; does it offer other queuing disciplines such as Shortest-Job-First (SJF), Random, Preemptive-Shortest-Job-First (PSJF), Shortest-Remaining-Processing-Time (SRPT), etc.? My reasoning is that I want to enable the framework CORE (https://github.com/coreemu/core) to use queuing disciplines beyond what it currently offers (FIFO, WFQ, DRR), and in CORE the queuing disciplines are specified by the Linux Network Emulator.
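To make concrete what I mean by changing the qdisc, here is a minimal sketch of what I have in mind using the tc command line; the interface name eth0 is just a placeholder:

    # Show the current root qdisc (pfifo_fast is the usual default).
    tc qdisc show dev eth0

    # Attach netem as the root qdisc to emulate delay and loss.
    tc qdisc replace dev eth0 root netem delay 100ms loss 1%

    # Or attach a different built-in scheduler, e.g. Stochastic Fairness Queueing.
    tc qdisc replace dev eth0 root sfq perturb 10

    # Remove the custom qdisc and fall back to the default.
    tc qdisc del dev eth0 root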

I apologize for this lengthy question and hope someone can help.

  • In the TC world there is already a "Network Emulator" qdisc called netem (see netem). This network emulator is a part (so a subset) of TC, not the other way around. So are you talking about this network emulator, or another one? Also, are the algorithms you're talking about intended for networking, or for CPU scheduling?
    – A.B
    Jan 21, 2020 at 18:20
  • Anyway, mandatory link: tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.qdisc.html
    – A.B
    Jan 21, 2020 at 18:31
  • Thank you for the reply. I will start reading the documentation now. To answer your first question: according to the CORE architecture documentation, "Link characteristics are applied using Linux Netem queuing disciplines." I interpret that to mean that nodes in CORE, which are Linux network namespaces, have their queuing discipline specified by Linux netem, so that is the network emulator I believe I am talking about. To your second question, this will be for network scheduling on a virtual network created in CORE (really created using Linux network namespaces, Linux Ethernet bridging, and Linux netem). Jan 23, 2020 at 18:14
  • Here is the CORE documentation if that helps at all: coreemu.github.io/core/architecture.html Jan 23, 2020 at 18:18
  • It appears the network part is handled by the kernel, which has a limited set of network qdiscs, none of which are in the list you're showing. Looking on the internet, your list appears to be intended for scheduling CPU jobs, not network packet queues (one can imagine it could be applied, but there is no such kernel module; see the sketch after these comments).
    – A.B
    Jan 23, 2020 at 19:58
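
Following up on the last comment, a rough sketch of how to check which qdisc modules the running kernel actually ships, and how one might attach a non-default qdisc inside a network namespace (CORE nodes are namespaces); the namespace name n1 and interface eth0 are placeholders:

    # List the qdisc modules built for the running kernel
    # (sch_netem.ko, sch_sfq.ko, sch_drr.ko, ... -- no SJF/PSJF/SRPT modules exist).
    ls /lib/modules/$(uname -r)/kernel/net/sched/

    # Attach one of the available qdiscs inside a namespace-based node.
    ip netns exec n1 tc qdisc replace dev eth0 root fq_codel

    # Verify which qdisc is now in effect.
    ip netns exec n1 tc qdisc show dev eth0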
