My contention is that certain software development practices, including the heavy use of assertions, contribute to far more problems than they solve.
Run-time checking with extensive, tunable logging, by contrast, can help find the most devilish problems and provide an audit trail for your support team.
My simplified example of validating and logging everything (with complete error recovery) is:
...
do {
    if ((rc = TableOpen(...)) != 0) {
        Log("Couldn't open table");
        break;
    }
    bTableOpened = TRUE;
    if ((rc = TableLock(...)) != 0) {
        Log("Couldn't lock table");
        break;
    }
    bTableLocked = TRUE;
    if ((rc = RecordLock(...)) != 0) {
        Log("Couldn't lock record");
        break;
    }
    bRecordLocked = TRUE;
    if ((rc = RecordUpdate(...)) != 0) {
        Log("Couldn't update record");
        break;
    }
} while (0);
if (bRecordLocked) {
    if ((rc1 = RecordUnlock()) != 0 && !rc) {
        Log("Couldn't unlock record");
        rc = rc1;
    }
}
if (bTableLocked) {
    if ((rc1 = TableUnlock()) != 0 && !rc) {
        Log("Couldn't unlock table");
        rc = rc1;
    }
}
if (bTableOpened) {
    if ((rc1 = TableClose(...)) != 0 && !rc) {
        Log("Couldn't close table");
        rc = rc1;
    }
}
return (rc);
IOW, everything is checked _at run-time_. Period. And everything can be tracked, depending upon the verbosity level of your logging and desire to audit.
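A minimal sketch of what "tunable verbosity" could look like. The names (g_verbosity, LOG_ERROR, LOG_WARN, LOG_TRACE) are hypothetical, and this Log takes a level argument, unlike the one-argument Log in the example above; it returns 1 if the message was written and 0 if it was filtered out:

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical verbosity levels: production runs quiet at LOG_WARN,
   while support can dial up to LOG_TRACE for a full audit trail. */
enum { LOG_ERROR = 0, LOG_WARN = 1, LOG_TRACE = 2 };

static int g_verbosity = LOG_WARN;   /* tunable at run time */

static int Log(int level, const char *fmt, ...)
{
    if (level > g_verbosity)
        return 0;                    /* filtered out: near-zero cost */
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    fputc('\n', stderr);
    return 1;                        /* message was written */
}
```

Because filtered messages cost only an integer compare, the trace-level calls can stay in the shipped binary and be switched on in the field.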
My key finding over 20+ years: few applications are so performance-intensive that you can't afford to check whether a pointer is NULL or to verify that a return code came back the right way.
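A hypothetical example of that style of checking: CopyName and its error codes are invented names, but the shape is the point. Invalid input is caught at run time, logged, and reported through a return code the caller can check, rather than assert()ed:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical error codes for illustration. */
#define ERR_NULL_ARG  (-1)
#define ERR_TOO_LONG  (-2)

static int CopyName(char *dst, size_t dstlen, const char *src)
{
    if (dst == NULL || src == NULL) {
        fprintf(stderr, "CopyName: NULL argument\n");
        return ERR_NULL_ARG;         /* caller sees the failure */
    }
    if (strlen(src) >= dstlen) {
        fprintf(stderr, "CopyName: source too long\n");
        return ERR_TOO_LONG;         /* no silent truncation */
    }
    strcpy(dst, src);                /* length already validated */
    return 0;
}
```

The checks survive into the release build, so a bad pointer produces a log line and an error code instead of undefined behavior.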
IMO, a big reason most software is unreliable is that "go-naked" tools like ASSERTs are overused (so called because in RELEASE mode, you're going nekkid :-).
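To make the "go-naked" point concrete, here is a small demonstration of the standard mechanism: release builds typically define NDEBUG, which makes every assert() expand to nothing, so the check you relied on in DEBUG simply never runs. The names noisy_check and run_release_demo are invented for the demo:

```c
#define NDEBUG              /* what a typical RELEASE build defines */
#include <assert.h>

static int evaluations = 0;

static int noisy_check(void)
{
    evaluations++;          /* side effect proves the check ran */
    return 1;
}

static int run_release_demo(void)
{
    assert(noisy_check()); /* under NDEBUG: expands to ((void)0) */
    return evaluations;     /* stays 0: the argument is never evaluated */
}

/* Restore a working assert() for any code after this point;
   <assert.h> may be re-included to track the current NDEBUG. */
#undef NDEBUG
#include <assert.h>
```

The run-time checks in the table example above, by contrast, compile identically in DEBUG and RELEASE.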