In News

Earlier this month, Springer Nature Technology and Publishing Solutions published the Handbook of Cancer and Immunology – a “must read” for anyone looking for a comprehensive yet accessible and up-to-date overview of the latest advances in cancer immunology and immunotherapy.

We were delighted to work with Joe YEONG 杨宝诚 and colleagues at the Institute of Molecular and Cell Biology and Nanyang Technological University, Singapore, on the chapter: The Hurdle of Precision Medicine in Cancer Immunotherapy: Personalization Now or Then?

This chapter alone is a fascinating resource that raises important questions and offers suggestions for future research to advance the field.

Take a look: https://lnkd.in/dDpvq-AF

Congratulations to everyone involved in creating this important resource!

In Blog

At the start of the year, we launched our “fresh look” grant editing service. We have already started working with many of you and look forward to learning the outcome of your proposals later in the year. Meanwhile, IEL’s Director and senior editor Dr. Neil McCarthy has put together a three-part blog on “How to write winning grant applications”.

About Neil

Before finding out what Neil has to say in part one of his blog, let’s find out why he is ideally placed to provide IEL clients with some useful hints and tips on successful grant writing!

Many of you will know Neil as a senior editor with Insight Editing London (IEL), but he is also a Lecturer in Immunology / MRC Career Development Fellow in the Faculty of Medicine at Queen Mary University of London. Neil is Research Lead in The Blizard Institute’s Centre for Immunobiology, as well as Infection and Inflammation Lead in the cross-faculty Centre for Predictive in vitro Models. He has secured more than £1M in personal research funding to date, including research council grants, charitable funding, and various commercial projects. Let’s hear what Neil has to say, in part one of his blog, “The Wisdom of Crowds”.

The Wisdom of Crowds

“An excellent piece of advice for all aspiring grant writers is to START EARLY. On too many occasions, I have been asked to provide feedback on a rough draft proposal when the planned submission date is only a week or two away! Constructing an excellent grant takes a lot of time, and ideally a large amount of feedback from both specialist and non-expert reviewers collected along the way. This is vital to ensuring that you put forward the best possible case for support.

Following on neatly from this last point – do not listen to ALL the advice you are given simply because it has been offered. This may seem counter-intuitive, but remember that not all input you receive will be *good* advice. An important part of your job as the applicant is to discern the difference between a constructive, valid point and other comments that may be less valuable or even harmful to your case. When writing my own fellowship proposal, I received feedback ranging from ‘this looks great / submit right now!’ all the way to ‘you should start again with a blank sheet of paper’ (in both cases, these were comments on the final proposal that was ultimately submitted and funded). So, always be wary of extreme opinions – whether strongly positive or negative – since these often provide little useful information to help enhance your application.”

Stay tuned for part two that discusses the importance of “Time, Team and Tools”!

In Client successes

Publication success!

We are delighted to announce the latest paper from Jin Liu and colleagues, which describes a new method known as PRECAST that can integrate spatial transcriptomics datasets from multiple tissue slides and possibly even multiple individuals.

As detailed in their Nature Communications paper, Jin Liu et al. show that PRECAST is computationally scalable and applicable to spatial transcriptomics datasets derived from different platforms.

You can find out more about how PRECAST was developed and tested on both simulated and real datasets, here: https://lnkd.in/d7jetuK5

Well done to everyone involved in this important project – we are delighted to see it available to read online!

In Client successes

Publication success!

New findings from a single-cell RNA sequencing analysis of cervical cancer tissues reveal key factors involved in cervical cancer initiation and progression.

The study by Chao Liu and colleagues, published in January in Science Advances, provides great insight into the transcriptional programs underlying each stage of cervical squamous cell carcinoma (CESC). The researchers sequenced more than 75,000 cells isolated from human cancer tissues at various stages of malignancy. From these data, they could trace the trajectory of cervical epithelial cells and correlate the abundance of specific myeloid, lymphoid and endothelial cell populations with CESC progression.

This is a fascinating study and a very interesting read: https://lnkd.in/eMAYu9nr

Well done to all those involved in this impactful study!

In Client successes

Inhibiting G9a/GLP improves engineered T-cell antitumor activity

The Insight Editing London team were delighted to see an early draft of this manuscript, before its submission to Nature Communications. Now accepted and available online, the paper describes how researchers in Singapore and New York have aimed to improve the antitumor activity of engineered T cells.

Lam et al. found that short-term inhibition of G9a/GLP increases T-cell antitumor activity against hepatocellular carcinoma both in vitro and in a mouse model, by increasing granzyme expression and precipitating changes in pro-inflammatory gene expression.

Check out the full, open access article here: https://www.nature.com/articles/s41467-023-36160-5

This is a really interesting study with huge potential to improve the efficacy of engineered T-cell therapy for many cancer patients. Congratulations to all those involved in this exciting work!

In Blog

Can we really use AI to write research papers?


As an editor, writer, and scientific researcher, I am following with interest the growing momentum of artificial intelligence (AI) programs in the context of scientific writing.

AI tools are certainly causing a stir and even leading journals Nature and Science are at loggerheads over the best way forward.

Once I learned that Nature was accepting submissions that acknowledged AI-assisted writing tools, I decided it was time to check things out for myself. After reading various opinions on the subject, I concluded that for the time being, these algorithms are likely best used for creating “filler” text: introductions and summaries.

I initially asked one such prominent AI tool (ChatGPT) to compose an introduction for a review paper on acute kidney injury (AKI) in children. I was pretty impressed – within a few seconds I had some fairly decent prose written in the style of a review article introduction. Sentences perfectly formed. A natural tone. Ideal.

But the text was superficial. The introduction comprised just 126 words, of which a third described what the review would be about (based on the text I input).

Perhaps these algorithms needed more input than I thought. So, giving the algorithm the benefit of the doubt, I provided a bit more information, asking it to include a discussion on the genetic basis of AKI in children.

I gained just 20 words on the original.

First impression – a good starting point but certainly not relieving me of the task of writing my introduction.

On to its next test. “Can you include some references in the introduction you have written?”, I asked. “Certainly!”, it boldly responded.

The same introduction came back, but now with three reference citations dotted throughout the text. I was shocked: at first glance, we now had something approaching the full package. A fully referenced piece of novel, grammatically correct text. The references were absolutely plausible – the journals were well known in the field, I recognized the author names, and the subject area matched their expertise. There was nothing to suggest anything was wrong.

But the editor and researcher in me checked these three references out. I could not find them anywhere.

So I asked ChatGPT directly, “Are these real references?”. The reply was adamant. “Yes, the references cited are genuine articles that have been published in the scientific literature…” The only concession ChatGPT made was that being a language model, it does “not have the ability to independently verify the accuracy or validity of the content of these references” (a perhaps even more important issue for a later discussion).

I searched again, and again came up with nothing. I asked ChatGPT three times, and each time it maintained that the references were genuine, though it conceded that perhaps the citations contained typographical errors, which might explain why I could not find them. On my fourth attempt at telling ChatGPT that the references were non-existent, the response was unexpected, to say the least.

ChatGPT returned to me the 200 words or so of introductory text, but this time with a disclaimer that the references referred to “are fictional and provided as an example only”.

Fourth time lucky. It took me half an hour to create some 200 words of falsely referenced text, and half an hour to get ChatGPT to admit it. Could I even trust those 200 words now?

Armed with this knowledge, I repeated the experiment on a different subject area. The same thing happened. This time, two out of three references were invented by ChatGPT. Now knowing how to probe ChatGPT for the truth, I quickly got the response that “the references were generated based on commonly cited sources in the scientific literature” and that “they do not appear to be accurate or credible sources of information”. But I had to probe for this answer many times before the truth came out.

My discovery, therefore, is quite profound. ChatGPT lied. Several times. Here, no harm was done, but only because I was committed to checking and double-checking. Will everyone using these tools to save time be so persistent? In the real world, misattributing statements of fact to actual researchers is dangerous and misleading. Moreover, if the content these tools create is also false, and then attributed to an active researcher…then what? The repercussions could be serious.

I have no doubt that these prototypes are going to develop into highly sophisticated tools that will have enormous benefits, uses and applications. But I urge caution, especially in the context of the biomedical sciences. As with most things, there are pros and cons with these tools, and while we are in the early days of their development, I suggest that you trust your own abilities rather than a computer to write your papers.

*My tests were based on ChatGPT Jan 30 Version. Free Research Preview. These tests were conducted in February, 2023.