Academic journal article: Journal of Small Business Management

Evaluating the Impact of SBDC Consulting: A Reply to Elstrott

Article excerpt

In a recent article we reported the results of one of the first systematic attempts to determine whether the Small Business Development Centers' (SBDC) consulting activities in Georgia and South Carolina were cost effective. The Georgia study was conducted between late 1982 and early 1983 using 1981-1982 data. The South Carolina study examined the 1982 time period in a manner similar to the Georgia study. Both studies, along with other selected studies of SBDCs, were presented as testimony in congressional hearings in 1983.

Our results suggested that the Georgia and South Carolina SBDCs were cost effective, returning tax dollars to the respective state governments and the federal government well in excess of the cost of providing the consulting services. The favorable cost/benefit projections were obtained despite the fact that we used conservative measures for sales tax and personal income taxes, considered only long-term cases (at least 12 hours of consulting) as the basis for projecting impact, and compared state and federal tax revenues generated solely by long-term SBDC cases with the cost of running the entire SBDC program (short-term cases, continuing education programs, research, information transfers, and so forth). Furthermore, we calculated the tax revenue accruing to government from successful businesses for one year only, whereas it is highly likely that such benefits would extend over a number of years.
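The structure of that comparison can be stated compactly. The sketch below is offered only as an illustration; the symbols are introduced here for exposition and are not figures or notation from the Georgia or South Carolina studies:

\[
\text{benefit/cost ratio} = \frac{T_{\text{state}} + T_{\text{federal}}}{C_{\text{program}}}
\]

where $T_{\text{state}}$ denotes one year of state sales and personal income tax revenue attributed to long-term clients (at least 12 hours of consulting), $T_{\text{federal}}$ the corresponding federal tax revenue, and $C_{\text{program}}$ the cost of operating the entire SBDC program, including short-term cases, continuing education, research, and information transfers. A ratio above one means the program returned more tax revenue than it cost; confining the numerator to long-term cases and a single year of benefits biases the estimate downward, which is why we describe the projections as conservative.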

Recently, a study of the impact of the SBDC program was attempted in the state of Louisiana with our study serving as the primary research model. Problems encountered in this replication inspired a Journal of Small Business Management article, which precedes this reply. The purpose of that article was to critique our methodology and to offer suggestions for improvement, as well as to report the results of the Louisiana study.

At the request of the JSBM editors, this reply was written in response to the criticisms in that article. We also describe the progress made between 1982, when the original study began, and 1985, when it was published.

While we do not dispute that some improvements could be made in our methodology, we do not believe that it is as inaccurate as the Elstrott article implies. For example, Elstrott criticizes our use of statewide average changes in sales, employment, and profits for the control group, stating that it "distorted the calculation of the incremental percentage change between the client sample and the control sample" because our control sample was more stable than our client sample. He goes on to argue that "developing a control sample more precisely matched than statewide averages is not a hopeless academic pipe dream," citing Robinson's study as an example.
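Before responding, it may help to state the calculation at issue. The following is a sketch in notation we introduce here for exposition, not notation drawn from either article:

\[
\Delta_{\text{incremental}} = \Delta_{\text{client}} - \Delta_{\text{control}}
\]

where $\Delta_{\text{client}}$ is the average percentage change in sales, employment, or profits for the SBDC client sample and $\Delta_{\text{control}}$ is the corresponding statewide average. Elstrott's contention is that because the statewide figure moves less than the client sample, the difference is a distorted estimate of the program's incremental effect.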

Our study did, in fact, trade precision for practicality in choosing statewide averages to represent the control group. Limitations of resources and time constrained our ability to obtain a control group similar to the one used by Robinson. The control sample we selected, however, was operational and reasonably accurate, and it provided a frame of reference within which parties interested in SBDC evaluation, particularly legislators, could make meaningful comparisons.

Elstrott's use of the term "stability" as a basis for criticizing our control group is not very meaningful, although the underlying idea, that statewide averages include very large as well as small firms, is a valid if rather obvious point. Both the data and previous research, however, suggest that this problem was not serious enough to invalidate our findings. For example, Pelham's comparisons of statewide averages in sales, employment, and profit growth with averages for small businesses (fewer than 20 employees) within the same statewide database suggest that the differences are not significant, at least in Iowa. In addition, it is possible that, if bias did exist, the volatility of our client sample (as compared to statewide averages) actually lowered our impact estimates. …
