The Science Citation Index (SCI) is a retrieval tool for journal literature launched in 1964 by the Institute for Scientific Information (ISI) in the United States. It has been published as a print index, on CD-ROM, and as an online database. The Science Citation Index is operated by Clarivate Analytics.
Impact
The Science Citation Index rests mainly on S. C. Bradford's law of scattering and Eugene Garfield's theory of citation analysis. By compiling statistics such as how often papers are cited, it supports multi-faceted evaluation of academic journals and research output, and is used to gauge the research productivity of countries or regions, research institutions, and individual researchers, thereby reflecting their international academic standing. For this reason, the SCI is widely regarded internationally as one of the most authoritative retrieval tools for scientific and technical literature.
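As a purely illustrative sketch rather than anything drawn from the SCI itself, the following Python snippet shows how per-paper times-cited counts, the basic statistic behind the evaluations described above, could be tallied from citing-to-cited pairs of the kind a citation index records; the paper identifiers and data are hypothetical.

from collections import Counter

# Hypothetical citing -> cited pairs; each tuple means "the first paper cites the second".
citations = [
    ("paper_A", "paper_X"),
    ("paper_B", "paper_X"),
    ("paper_C", "paper_Y"),
    ("paper_D", "paper_X"),
]

# Times-cited count per paper: the raw statistic used in citation-based evaluation.
times_cited = Counter(cited for _, cited in citations)

# Rank papers from most cited to least cited.
for paper, count in times_cited.most_common():
    print(paper, count)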
Further reading
Borgman, Christine L.; Furner, Jonathan. "Scholarly Communication and Bibliometrics". Annual Review of Information Science and Technology. 2005, 36 (1): 3–72. doi:10.1002/aris.1440360102.
Meho, Lokman I.; Yang, Kiduk. "Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science versus Scopus and Google Scholar". Journal of the American Society for Information Science and Technology. 2007, 58 (13): 2105. doi:10.1002/asi.20677.
Garfield, E.; Sher, I. H. "New Factors in the Evaluation of Scientific Literature Through Citation Indexing". American Documentation. 1963, 14 (3): 195. doi:10.1002/asi.5090140304.
Garfield, E. "Citation Indexing for Studying Science". Nature. 1970, 227 (5259): 669–671. PMID 4914589. doi:10.1038/227669a0.
Garfield, Eugene. Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. Information Sciences Series, 1st ed. New York: Wiley-Interscience. 1979 (reprinted 1983). ISBN 9780894950247.
External links
Garfield: scientific quality cannot be judged by the number of SCI papers
International Science Index
Introduction to SCI
Master Journal List
Chemical Information Sources / Author and Citation Searches, on Wikibooks.
Cited Reference Searching: An Introduction. Thomson Reuters.