Thursday, October 07, 2021

Creating Windows/Linux bootable USB in Mac OSX from ISO image (command line)



1. Locate the USB disk you are using. My example (/dev/disk4) is shown below.

# diskutil list
...
/dev/disk4 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *61.5 GB    disk4
   1:                 DOS_FAT_32 USB64G                  61.5 GB    disk4s1

2. Erase and partition the USB disk.

# diskutil partitionDisk /dev/disk4 GPT "Free Space" x 0

3. Unmount the USB disk.

# diskutil unmountDisk /dev/disk4

4. Convert the ISO image to a DMG image (the output file will be named ubuntu.dmg).

# hdiutil convert -format UDRW -o ubuntu ubuntu-20.04.3-desktop-amd64.iso

5. Create the bootable USB disk.

# sudo dd if=ubuntu.dmg of=/dev/disk4 bs=1m
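The steps above can be combined into a small script. The sketch below wraps them in a function with a dry-run default (it only prints the commands), since dd will destroy everything on the target disk; the disk identifier and ISO name are the examples from this post and are not guaranteed to match your system.

```shell
# Sketch of the full sequence. By default each command is only echoed;
# pass an empty third argument ('') to actually execute them.
make_boot_usb() {
  disk=$1; iso=$2; run=${3-echo}   # default runner: echo (dry run)
  $run diskutil partitionDisk "$disk" GPT "Free Space" x 0
  $run diskutil unmountDisk "$disk"
  $run hdiutil convert -format UDRW -o ubuntu "$iso"  # produces ubuntu.dmg
  $run sudo dd if=ubuntu.dmg of="$disk" bs=1m
}

# Dry run: prints the commands without touching the disk.
make_boot_usb /dev/disk4 ubuntu-20.04.3-desktop-amd64.iso
```

Verify the disk identifier with `diskutil list` before dropping the dry-run default.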




Monday, April 29, 2019

Centos warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

Add the following lines

LANG=en_US.utf-8
LC_ALL=en_US.utf-8

in file

/etc/environment
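A minimal sketch of the fix. It writes to a scratch file by default so it is safe to try; on the actual machine you would point TARGET at /etc/environment (as root), then log in again or run `locale` to verify.

```shell
# Append the locale settings, then show what was written.
# TARGET defaults to a scratch file here; use /etc/environment for real.
TARGET=${TARGET:-./environment.test}
cat >> "$TARGET" <<'EOF'
LANG=en_US.utf-8
LC_ALL=en_US.utf-8
EOF
grep 'LC_ALL' "$TARGET"
```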

Thursday, March 21, 2019

Change network interface name from enpxxx to ethX

Tags: CentOS, Consistent network device naming, GRUB
----------

CentOS 7.x uses the consistent network device naming scheme by default, which makes Ethernet interfaces appear as enpXXX.

Some users and some older tools do not handle this well. For example, some license daemons for EDA tools derive the HostID from the MAC address of the ethX interface.

To switch it back to the old ethX naming scheme:

1. Disable the NetworkManager
# systemctl disable NetworkManager

2. Update the boot option in the /etc/default/grub file. Locate the GRUB_CMDLINE_LINUX line and insert "net.ifnames=0 biosdevname=0" into it. For example:
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos_bhem/root rd.lvm.lv=centos_bhem/swap net.ifnames=0 biosdevname=0 rhgb quiet"

Note: Make sure the GRUB_CMDLINE_LINUX line is a single line. No line break is allowed.

3. Make the boot option effective.
For BIOS boot:
# grub2-mkconfig -o /boot/grub2/grub.cfg
For UEFI boot:
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

4. Create configuration for the interface. For example:
# cd /etc/sysconfig/network-scripts/
# mv ifcfg-enp2s0 ifcfg-eth0
Then update the ifcfg-eth0 file, replacing 'enp2s0' with 'eth0'. For example:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth0"
UUID="a062bfa4-8820-4d64-bdeb-28b3f5052fce"
DEVICE="eth0"
ONBOOT="yes"

5. Reboot
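After the reboot, it is worth checking that the options took effect. A small sketch: the helper below only inspects a kernel command line string, so on the real machine you would feed it `$(cat /proc/cmdline)` and also run `ip link show eth0`.

```shell
# Check whether a kernel command line carries the two naming options.
has_old_names() {
  case "$1" in
    *net.ifnames=0*biosdevname=0*) echo yes ;;
    *) echo no ;;
  esac
}

has_old_names "net.ifnames=0 biosdevname=0 rhgb quiet"   # prints: yes
```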


Saturday, August 12, 2017

Speed test of file copy


Speed test of file copy: 1 GB random data file.

Linux:
2.7 GHz Intel Core i5-5257U
4 GB 1333 MHz DDR3
Realtek RTL8111 PCIe GbETH
CentOS 7

local copy: 6~9 sec, typical 8.5 sec ==> ~100 MBps
copy from SSD to USB: typical 20~21 sec ==> ~50 MBps
copy from USB to SSD: typical 5~10 sec ==> ~100 MBps


Mac:
MacBook 12-inch 2016
1.1 GHz Intel Core m3
8 GB 1867 MHz LPDDR3
macOS Sierra 10.12.6
5 GHz wireless network

SCP from Linux SSD to Mac: 20 sec ==> ~50 MBps
SCP from Mac to Linux SSD: 55~65 sec ==> ~16 MBps
SCP from Linux USB to Mac: 20 sec ==> ~50 MBps
SCP from Mac to Linux USB: 55~65 sec ==> ~16 MBps
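The MBps figures above follow from dividing the 1024 MB file size by the elapsed seconds. A tiny helper for that conversion (rounded to one decimal; the post quotes the results loosely):

```shell
# Convert "1 GB copied in N seconds" into MB/s.
mbps() { awk -v t="$1" 'BEGIN { printf "%.1f\n", 1024 / t }'; }

mbps 8.5    # local copy   -> 120.5
mbps 20.5   # SSD to USB   -> 50.0
```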

Attacks on my home server

This is what my home server experienced in two days ... strangely, I did not announce my IP in any way. The following are only attacks on SSH, as that is the only port I forwarded. I have not checked the router log yet; it is a can of worms I don't want to open.

From: Aug 7 03:38:02
To: Aug 9 14:03:27
Total attacks: 185674
Unique IPs: 112
Top 10 IPs: (by number of login attempts)
85368 61.177.172.--
46557 116.31.116.--
16630 116.31.116.--
16098 58.242.83.--
10224 59.63.166.--
4587 61.177.172.--
1866 218.87.109.--
758 103.58.116.--
634 162.243.39.--
498 60.165.208.--
Total countries: 31
Top 10 countries: (by number of unique IPs)
35 CN
12 AR
7 US
7 KR
5 BR
4 RU
4 DE
3 SE
3 FR
3 EC
IPs tried valid user names: 86
Total valid user names: 10
Top 10 valid user names:
183372 root
13 nobody
13 bin
10 ftp
9 adm
7 operator
5 sshd
5 daemon
4 transmission
2 rpc
IPs tried invalid user names: 92
Total invalid user names: 317
Top 10 invalid user names:
646 admin
170 postgres
89 odoo
62 backup
55 pi
50 support
47 usuario
40 ubnt
33 service
33 oracle
Top 10 info:
"61.177.172.--", "Nanjing", "Jiangsu", "CN",
"116.31.116.--", "Shenzhen", "Guangdong", "CN",
"116.31.116.--", "Shenzhen", "Guangdong", "CN",
"58.242.83.--", "Hefei", "Anhui", "CN",
"59.63.166.--", "Nanchang", "Jiangxi", "CN",
"61.177.172.--", "Nanjing", "Jiangsu", "CN",
"218.87.109.--", "Nanchang", "Jiangxi", "CN",
"103.58.116.--", "Namakkal", "Tamil Nadu", "IN",
"162.243.39.--", "New York", "New York", "US",
"60.165.208.--", "Lanzhou", "Gansu", "CN",
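The post does not show how these counts were produced; the sketch below is one plausible way, assuming CentOS-style sshd entries in /var/log/secure ("Failed password for ... from <IP> ..."). The log path and line format are assumptions, not taken from the post.

```shell
# Count failed SSH logins per source IP from sshd log files.
# Assumed line format (CentOS /var/log/secure):
#   Aug  7 03:38:02 host sshd[1234]: Failed password for root from 61.177.172.1 port 22 ssh2
count_attackers() {
  awk '/sshd.*Failed password/ {
         for (i = 1; i <= NF; i++)
           if ($i == "from") { hits[$(i+1)]++; break }
       }
       END { for (ip in hits) print hits[ip], ip }' "$@" | sort -rn
}
```

Usage on the server would look like `count_attackers /var/log/secure*`, with the country and location columns coming from a separate GeoIP lookup.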

Thursday, April 27, 2017

Release Wednesday

How to do continuous integration for hardware design -- a story of release Wednesday.

Background

Recently I came across the issue of performing continuous integration (CI) in a hardware development flow (mostly RTL front-end design). There are discussions of tools and frameworks for this task, but most of the available tools are for software development, so it is not easy to choose one.

I am more concerned with adapting the approach (or the common practices) of the software CI flow to the hardware development environment. The company I worked with had a software CI infrastructure in place for its RTL projects. Despite the large effort put into it, the system did not generate many significant results (a lot less than what I expected). Without the incentives to maintain the system, it died a slow death.

We could spend a long day discussing the differences between hardware and software development and be very academic about it. But what I want to present instead is another story that happened in the same company.

Wednesday

It was in the early years, and the concept of CI was not yet that popular (at least not in that hardware company). Projects still had the same need to regularly check the health of the code base. Without any fancy tools, we came up with the idea of release Wednesday.

Every Wednesday, the project technical lead (a role, not a title) would check with the engineers to see what could be (and, following the project schedule, should be) ready for verification and integration. Then the technical lead would actively pull those modules into the main branch and run the integration to produce a coherent and consistent copy of the project code base. Once the release process started, nobody was to touch the revision control system until the release was done.

It was a stressful day. For the module developers, it was the time they had to present their work to the team. All the sloppy work, every stupid bug, and any schedule slip would be transparent to the whole team. For the technical lead, it was the busiest day. All conflicts between modules had to be resolved to prevent further divergence between modules. A lot of decisions (e.g. which feature to integrate first, which conflicts could be grouped while resolving, when to notify the verification team for the quick tests) had to be made to deliver the release. For the verification team, the pressure was on the quick turnaround time. The release candidate (RC) was usually available mid-day. Then a battery of tests was run to assure the quality of the RC. There might be several iterations, and thus several RCs to check.

Working as a team was very much the reality of this day, as everyone had to focus on the release and stand by for unexpected work (or rework).

Achievements

After the release, everyone can start working on a common code base which is known to work.

The verification team will check whether the previously reported bugs/issues are resolved, whether the planned features are integrated and functioning, and whether any new bugs/issues have been introduced.

The front-end developers can start working on new features. If their work depends on features from others, this is a good time to check whether those features are present and to plan the development work accordingly.

The technical lead gains a very good understanding of the status of the code and the progress of the project. Integrating and resolving the integration conflicts is always a refresher on the functional and implementation specifications.

Based on the reports from the technical lead, the project manager can then review the project schedule and plan the resources better for the next week.

The end result of this Wednesday release is no different from the result of an automatic CI tool. The real benefit is the impact on the team members (i.e. the humans). The difference is between waiting for machine-generated integration/regression reports and being actively involved in the integration process. The exposure to full-project-scale integration, and the focus on resolving issues with other teammates, are the most valuable parts. In my experience, everyone in the team got to know the project better each time, and release Wednesday became pretty smooth after the team got used to the integration process.

Compared to an automatic CI process, this approach generates very little noise. Everyone involved was happier this way during the project.

Notes

While adopting this approach, a few notes should be kept in mind.

  1. The team involved should not be too large. My experience is with a team across the UK and India with fewer than 10 people. The actual number of engineers contributing to the code may be larger, but they can select a representative for this task (a single voice for a sub-team).
  2. The technical lead must have some degree of authority to make decisions impacting the project schedule. He/she also needs very good knowledge of both the specification requirements and the practical work of front-end development.
  3. The members of the team need to be self-motivated and understand the reasons behind this approach. They have to be on standby for the day and actively help resolve the issues, even if the issues were not caused by their own work.
  4. It must be a regular, recurring task. In the initial stage of the project, the integration work may be rough due to the large delta changes. In the later stages, there may be only very minor updates within a week. None of this should be an excuse to skip release Wednesday.


Wednesday, April 26, 2017

Verilog TestBench Design

I have been working on my side project of digital circuit design in Verilog for a while. One thing that bugs me and pops up regularly is the way to write a test. I have a few options here:

  1. I can go for the old-school style and embed the test as blocks of Verilog code in the testbench. This gives me the largest flexibility in terms of precise control of the testing environment. But it is also the most troublesome style. It requires the test author to code in Verilog. It exposes the complexity and diversity of the DUT to the test author. It is difficult to build and hard to debug. It also requires re-compilation for even the simplest change in the test.
  2. Most commercial Verilog simulators support a built-in scripting engine which allows an interactive user experience through the simulator console. These include VCS, Incisive and ModelSim. TCL is the most commonly supported scripting language in these EDA tools, so one can build the complete test environment around this technology. The largest advantage is the ease of coding in a familiar syntax while maintaining fine control over the DUT through the TCL commands provided by the EDA tool. No recompilation is required. I have seen this approach in a practical environment for IP design. It works very well, as long as you have paid for the simulator. Unfortunately, I cannot find any free Verilog simulator supporting this, not to mention that I have to run it on my MacBook Air.
  3. UVM+SV is the trendy thing in SoC/IP design. I assure you that there are tons of information you can get from Google telling you its advantages. But again, I cannot find a free (either as in beer or as in speech) Verilog simulator supporting it.
  4. The approach I used is adapted from the company I worked for. The implementation is completely different but the underlying idea is the same: write a standard test vector generator which reads in a text-based test file. It then alters the signals connected to the DUT by parsing the text file line by line. A simple programming language with a handful of instructions and a text editor are all you need to create and modify a test. This works fine until you try more complicated tests with branches and arithmetic operations.
Now I am working on another approach which:
  • does not require recompilation of the Verilog code (this is very important, as we cannot assume the person who creates and runs the test will have access to the same Verilog compiler; also, you can ship a binary instead of source code to the testing environment);
  • has a simple programming interface. It may not be as flexible and mature as TCL, but it should support simple looping and arithmetic operations;
  • is independent of the Verilog compiler and simulator. This will ensure the portability of the tests and allow future extension.

Tuesday, November 18, 2014

deque vs. vector performance

Tested on my Macbook Air (OS X 10.10, 1.7GHz i5, g++, -O3):

average of 100 runs :
  each push_back() 1,000,000 times,
  then pop_back() 1,000,000 times,
  item size 64-bit.

deque performance : 95 ms
vector performance : 50 ms

#include <cstdint>
#include <deque>
#include <iostream>
#include <sys/time.h>
#include <vector>
using namespace std;

struct Event { uint64_t payload; };  // 64-bit item, matching the test setup above

int main() {
  struct timeval t0, t1;
  long t_d = 0;

  cout << "scheduler TB" << endl;
  Event e = Event();
  deque<Event> eb;
  //vector<Event> eb;               // swap in to measure vector instead

  cout << "size = " << eb.size() << endl;

  for (int j = 0; j < 100; j++) {
    gettimeofday(&t0, NULL);
    for (int i = 0; i < 1000000; i++)
      eb.push_back(e);
    for (int i = 0; i < 1000000; i++) {
      e = eb.back();
      eb.pop_back();
    }
    gettimeofday(&t1, NULL);
    t_d += (t1.tv_sec - t0.tv_sec) * 1000000 + (t1.tv_usec - t0.tv_usec);
  }

  cout << "size = " << eb.size() << endl;
  cout << "INFO: TB finished in " << t_d / 100 << " us" << endl;

  eb.clear();
  return 0;
}